[Binary artifact: POSIX tar archive (ustar) of Zuul CI job output. Contents: `var/home/core/zuul-output/`, `var/home/core/zuul-output/logs/`, and the gzip-compressed log `var/home/core/zuul-output/logs/kubelet.log.gz`. The compressed payload is not recoverable as text.]
yfKp_N(0CC?yoκz2!&DYoqجqoz~xrb41]p;Wzڿs')M=Kr8BsF~܆Z8'f/"\}b)9)l'=on&z'Ɣ7ʧ2/,FBT \,6z7a0E 7'sgXrQxDfN9ArJHy}w/ʉb*`WzpQY_W` `B_Y?\?."?k揽~T?rDv$bu&0NS0QhG,+m=2e_ P" HN.qI0uϦ[.q]RrNRH )x"78AQ W*zE'RD…F <4m?8ǐ]P\0h%Z3+aKWR^nAy :k1p!Z7  F8BdiWQXbE^x9(mP\0dvuXZF'] NNR*0فׁ*S/VWEYeDYծ/ t-/iiITi=˵6ï`+&v DǙm$%LoшP!`[&|AY1y.;N^pN9m\58&aXmT/U!r`jdI gRXfAp㨀D89S%~=Fxn0w΋BPk=OGW|l@O0ۨbdf˾9撃EA$Zoߜ(Q-ORo rKIؑL@ Xt#W p& ׻rde6?=sy^63.;ݛ.rh̾=56MV?|t3% >mpF;?=_6Of͌nG-/ۯCKOnׇ\xs{w_w%mBR1NmeDZhNÎNG#cad$.(tBi2<$_PZR+@䤎GPCaLR8Rq KӠC0gt5&{h-p%/)n,F )">+Md+ӷ->6lr}CyQ78iRPy,3,Z*wFKb!PBJ62W+!'ٱjg%\1ר/$r\HcKA0+9Ȓ%N+jg$LBQ#Ր7a a:*eYC";rb /lghg]}}f*mo?-u9aHֻ p%1)ӃN4{" T& 4עSGOXy!t.LE}.qHPSEI[LHB A-?8Is=}m?C+umIk]#pdEaPp2@88Q66R,:r(Yw5t"ٜ3e knk wYFY)E$'ṛDD,H0*M,# cqGb|Td_㣅ove#JL2 cg10D DR <)qǼEqy39r(!Z+l[6|ig|lOiDN%$S%(l7F%6yN0ܟuo/u+(nT܀z&Pt jvq9I )o!8B2Z>Kisg#=Ppl{hBE"TLf.-wKfI.3)62Fa\InC"HZ9irk\h&6FHp;m<8v(Cd"Geh (֕㵭f3An_n`p}Nc䞥D"$qgsPQ#20Z*c^*#%GH_Bw*PED7H\ϔL)ǯc JJSV*h"[:݅ABcP Kt=!4XSa2A%DzA%D" !ys$<$ JP$?0js-Mr `#x- KQ1rHBOX_E0b/au dsx%Wkr(AP!iy< շ*)6m0 6Rf b4 ͸O?H(hIaPJ,G1 }EQopiMFsf&YmZ>&FڸaZgon-I``x0jiRIVvhy]wŅ{xCBS13F0euf {3"XH&izΰ@h"]sQLݙpiGwgYy~W@P&\:$UX]#;P߫HƵc~>2> }ͻ᮸6KԯZ ~=^^?*&`m *(M?:Hr&&X'.Rɇ{53Ń15Yd_kSݿfO~p3h>xHVtFb&睹48_]0/7:{PWK%Wt nFfևU`,UjTMX$1gߏ+etMdYeq uzÿ?}_|x f`\ .IO旎^;z5tu~/~8\DxX^6mX@!c_O {-] 8ׄ,B]zlnW,@:Ð;r]wkkGA}r.twS:ӌ R6I4XZoQ!P>DNR+-9֩$j5;mq`G!"6Q6ZkjG*X\>Y9Ѥ?rsRd8rܖV(o"evwt;b K`i%0*T2SmLm0ZlrLས-Yr*2u}y3ZOik&)sc|}C3',H[ o^3{RR阠/ns^ہ1T31iUqdLVղ*-,֩9IϾve_̵kh/<-{ʋaQU?f2d~Wϊ_$׺v }׫WRKיh>|XotW#YPT{M:d׳t|NTf׀{Q.,Llu6@z|aiyةkYNR/kƷ[sr2(:o;cH$"﵌FMFS-#C8 IM ר]QQ8 ҒPN-J!ǝw2N g X{;›ApYxuMu,XM?7ʬ8cL(֢tS(#rK%bdF/T̙IYR]_Vusz 2!f3ޫ5'V4էѰ:`fU4 xN0+D{-"#a:0- 9r耥 aU!,r!c4S̀:$ D9k->0Gf19c ˜V!!: d1N Y$53XѲGZiDڳF@!,{B6Hksito䯔d)QT8f8GC(vdF2]-fqɐ%ѽ.κ?v.)Q҅B jtNti¢G)P# s{!~P0]>xaR[ERcib ܷrugMN쮯FCw>۱껳.Qi^BW+BUImZ6qʫm -L#.a΍S}y jimѼMy̴HB3<\y[4>Kh^A6ˌB{3>=੃/ 9-dRĔ#"᧠OO?eeG;G?s Ya), 6pkUAC0 Ct1s2,Rg2DOZKI S"!&"Ŝ(art֝^'WəƷ&l#2lE=ט+?+>?*|y軞.Vk65S˔- ,E T1kك7 x)aWyٌvSXde 
J}`]o{L0LP1ZA-O3Q>au&~,nV龓kVC:SymZ_uΠ:35PSrVz}0jC2hhf]!Lm]7j/!FaE %CJ]_tM1o#[E/~Ϧ 79Gv}n45-ÎԕrZ?g+Z?wفs8ewW+mLpbW$c0o^jSme>(6I-ڀ%lvU:{NCfNRzh|n,!93Hr$c:rB=tMC4i \EN 0 &"ĎʊH a$NRss"Yt4BI uE+T4h iB$ Lc,-g!)fߙa@mK7 {@W:Ho m BG/)q"%TjHKM y'bڶۘ早RύrCBT b&Fᥥ\hͨpq(s; ! bO3P"4+r\R##Q @Ѵa9kl:[Y!W>1+upQCAOC.qUʥY9*%Jx阊80s1DD!"Ф|}._)-c#tF,{ᝧ;Ld@Υ^s`R9l^!*c= ѺZZnk6Nz@8Mݐ)PF" C94ph7n`(s LVq48Fafܧ$ \J,G1 `xc78D4L'|g_2>K)*kܗH:2.xٿ|SK{'1w[NmJb"2$)aN~.LJ~ YJWA3)+k?I,EsH(atc H)")"W";6Qql[  N[lB!R$M_ݺ8N??K}3:0>YΗp]BWj_NN:˫[9D`θ7J[\CN1DJ ÿ'`'.|܍JFxꬼ<eg/'Eld$fr6KۏEI[]HeAɸ8 7#n/_1F#1z˺aH0Z,@h} LTLV|]O =ٽ<AGQY7jݳ KPbQ:2> ]dA̒c C~Tެ/?\PGgoO6w9{&N>,@q\7>5ϧ nV@~~|ot_؎H sUaZ7yI}_]XQ[DT*ZhZTly\ LT' H;9liXI4XZoQ!P>Dq-W[/Щ :0.xfVtۄjDT98Y9BѤidSgRΓUZzT¯j~IpK7΀5cRDsP(O0{^Iт׶ iX~(+v%?K}G.pny4K/`Γ6.{h DQBoTG/|ַfc惼`.E֭UK^L|0հ֫ LIɤ66w N[l?Ɇm:.tuj8RI-[M[+ESWuy(:9_Cp=O2Wݼ>mSvV"fVNrmon5~;JbF%T]y-{MoqgiUZ0iMx'آYoxʳF oCr/@.rOZ94l*oWT9.ALuȹʼn0rDZRr* \rmP&qx|E)ֵ qv* e:d=ME*`0rّuxIZކ,@ٷp]&P?β"|n˵|`U)渻hs)c]dɇwm>Il^o86cY*\luI]=`A%A K+;Dž!xY%g܋:$v>ʀd”檳 @Bj" {F ~n6>K!XqnjaKj'Q2[*HA *a=!?]e h~ ZA*(!oL^ T?F%p0 xq>fIBrkl fy1A\N4w'E/ lr8 zh?(l8zp wx;(CD ^znsCDm|9+>*1>+lKt2ë^̜FCS I:N #qL9*D{bS{B]H %BHbR$c}+KBme)^x0{R8K,d,Eu7r0S!PbB%ax0 +RɴY2\S+1!>==͇"9AAYa| :g߁Nr-cUøߛZ]FW߇02 UjiKؠ,oݻё`A6 :Z{KО\C(v_̥_:"QzVR\Yd0KHJY/5ZK9DŽRp6ȹMe:2,"Ah *[ rX+?Th$ 66忠9ٽ7))е9w]:6.Mn;maɺv’;IhżCUۤZmұKp>,ߠ@Wn s7rf OOR( -]m@ ӼViMǫ-!D!e!x0k5"2b=6MVHKDc=RIC4MU-zQ&H k.(qCvY`iI(JAHiR"Z34uћnihu=츨ڱ~-7w*P>FSASѨ7^%GkQ:E(brK%bdF/cL)lD֞qh3$%dj}AG:kzѸeVRD%$@tY$"R:" zH"88fং1 \YGa9AQN!7{ Ҙ&$VVƞi8XG!#1 )D][s+|Kpi\=&KvSy˅KC1EiE^oUi /$ISEָʲq|n#HtVp%TF u-{H둶OjbO[FUL`D ±~2Ct< MA4,Ϡ# cL *0U,`c2eN1HgK:|yΥb,bkN٣Vdɧw099 S{H1RW |1S$f)Гϓǫq3m@l>kv7Ndrt"fӠ=y7Nfl_nw$hdYOTKm,$RGg3 M"ȗЩ e(,~FT̤'d2NЙvS-<V"'Jږ>3W!VΖ|D7jHrO[vsyU38~L*^}l\א9+DenU.UVu"1kL;gNC@/m**-qW4P ޢS[r@fSCGʑ֝^ɳ^_7g_u:ԸޭW4K`oFX&B`_~7k ǩ<n@-Z]cOwS.Qetr- fkEiODR./=9kD!ˬRHnO)}.<\x!-ӗUWϲzmeד'+6ݾ2ҵ,l{u$1G/|?~ f2m@/ o4gЩԓ-w 9'*Õd6ڨxFL. 
W1K+)[N@:3K<p%cPZy3#rLQX7Zw6Ҟx{}ZA4gpkWsz/ny\8؜W:{VЇuC L "(ЊhAJ )G Z'%"}L*S®]g66`G"|vPޏX#f#7V(iv?mrdLD+0}[@ z u |{XÆdh,p&j7T=W]/C'vmr?]]s Z_uQ7o̅uΤ n_J7뫧O6T(ȏalRɒ~6hjMﳚ o|nJ_;.Ƽ͚*p#x*Pַ̹OF]MKs>yYe~|ezO͡z_W(Y.^?.=h- UxyCݭ3Qfxt xq dT[Cߧ*i_KOwTgݶKe,sa^B2WC ۂd0eΥ&:!J3G^ld%E)O' xLX)HDc=Z"~~Rb\{B0~cdK6]q ﺈ9{^[-xR{4 eP*4Y+v3Z1ipԒJ>+烯Ca \$zk ޒTH"! ,AeTΖfG E:Ec kP*Z2:I`A. -GI(pL$)D6J4-WULGd^"G R0dfLR ,PNFV/wU0q1vJ:Ĵk/3@T; Wpk B;!b,#=;t ֛V1#%NU7^/k;<08h4)P,jѽ{Ξsģ.wSoʯ>EDka%02 !Mr`z9[g4 }sO6@l.\W>+cƧqsۥoM=@sU xgEnum-fYlwx|YՃX{1ѮǂcIo+j"J7uDjm< ;8O}'+I{\*W˝.t]{ZbDuWISsO3:sN 3L|,qotˠ{|MbU:4+.ͮr:&hK[\^TFevXR&e2 \켊o^;uҒ.Nz+Z5|^l725OJ 7û;^vT>B %a<98mUpA%>}L(X_J|ɲc&mo^޹ ([O`3籫6i@Gu_6^uұ6h uj$W{hsjOKRL ~)l DL.w+'O|M>ة/o"Jh@bHI0kmB2h%hhd"$%źqL0Q[c8Ƥ}(%@q9!@px^Ɂڹ˫ug3,{)]U.Q|fy1y N̍FV^oS.'Rwb^, *9ØqܤGabс;F =r!w+Gpstل%)ՈTza2N&H)ޤbLB4*JRȒ'Qge/B;i!#PB$&2%hcԗ("oE47J杬jg-`6ҮYŇDwXRT)c6&,>)l&Id0k9BL6FH\%(! m[UBz]T*ra1N^eևDpp)+{UjŻQ'z6A(!1 =Rі=S> T^zNBHqRwĎwYv L73mt rPʄKiՠBM 90sm-"Է[lɩ|2懲2ek F+D )e sR࢔9 'Hzxԃǂ-r4:]!*CE#G)HYx["QN ɑM,8 3;^mF D`2#s8^dJA,Ҽ cYA8+\X?}44C_Z+':&Җh_۟kwwnΗ?jNpKmHgqRCgJ #-vOr~|Qel̇1Jŋ~mMu>}lEg ̕}8Σtn{%#`|=l;񟁱LwKVtՌnƹf/ڄv,Xxu>ѓ6Wû5{pou>ȮVƪ 풋V'+1RV|z}K\A6\Og畂\S5ie o.i9?O??G./?@O4wI(uuA |QK:鿽鿯{a BoUa2Lr 򰕓6,,!4HPi][9*&ZB0iaw滘-%HWrG}ڮrnJbŇr3Y#=DS[Mrf185|Jg FN.HQfa6aahd`*V HuꕤC[ ^Y=/sYPx91A2GBYm6j7 y'C*:'[mM|'|jHPWKG_V2V_?Alh庋RwC4^!@n` h}4( OTek봟KNW4x:/n=D鬲D^cVl9bo~2Vax: ")B&˭ TIьG!Y2y _ %lԏNNn`CeT;[dz!㯣>~]Ύl_ݻyFax\j! 
;гeWFس##){cG$&&#ӏpXDPHE4ȉ*Ad *& %B ڞ%% wYfO΅s5%wT/$i۽_O/OmG.^jŰӃ]Mźej&;~<+5:Y7?{WqeJf0n[ a0c0GèX" .+AH#)nɾu.ԫS!}kl*@n#Gtxz%[F26޳\N6pNc~+Ed]V?Vg'6(&oZu\xhSSrN6~}ǽOD?m#) }v|||e/s6guj/.rv!3q0z=9klhɱESL>;(-wϿ#|o٫ N ׯߡ;kT7_dAkuq_Na0ڧ;§>d 9=/N.ϧ=pW;1'*ÃI͗@/mƂ^lօY, 9nV[GM[єJ+0]&99Qȃ Y>lK7h̼ p@+-ZF~ Ѹzuryvm#ޤUyOoL}](EӾj5 ڼWcĨDY[`mS z3&W& nm 9[ѓn`pgA6y~VKq,Q|+̧G|r&ѓ}V _K]-N+6j'u~]WѬKLݵ`-wt'͆G#`Dvmy{?-3yq ӢJ㻮Y>k\LooIKٸ8bkREKrުu^WOǬ}:$bq;GWG9ma)\j_lr߮uٛuU//l8J.2YB_HgEiJF9(E[ ɔ\ݪJpZ0d(R7FP?|9gk2$v3Nk2% 0b:0KHڹ Rڎ(9ٜx>}Z6ڔ{@'/^\RDc}V53dMIQ.Q.wFJIEKR+ VPRBu!w zΛk#i9Jj֍_6E5GIl1&e5b'ܭ˱'9f8:A1|1!8" Auٕ}u(7v%>,`,./V[Fh#2g,qVF\pC6trVvf4R뾣.+ +*g5VQ@SZ6f=ak;C (TD@5RhݻE:v0*a#ZEQL1,i쀟P`k:Ye; R5;i\eΤʆ4j0OqIK;6@F,=5 !dI#w|͡7w^KC bw쁦~K%P/i0(Y0(!24 U`_)୥nŧE nb( aLu݅Z780),QK"lL6 _gAv-VTR@hqOdhʑf  -YwB=/:k q i0&v Zq*Ζ./0R%0X(5Kĭyw˧{LJu[kX+f%4' nj1DUX5jBB%ʨF#vg؇:(NUpҗmpdUmr%ۈ٢Ƭ2Va1 'z> =!K-j,(tA\x@z+PMTLyME>b1w w4o etQ J+&F@x+ @g6[ VESWFb/!Qݙ:`V^֫C(NCkCtٵny|+"d*S8r+ x6*m,#z Х$bQnFBMwx6@&e34~^x0 hcEۚB`@ؚb R˩4ts \A727_Ht&e3:!mEAu0XR t.M(zDG〔 1zL>mlE_1h"^uN ~9w$EyciUO(aZT1tGQLFRu#x$w=,j*=661m|8ul̗؊-#5~IXv=ԵjרMg&= d **Zj PmA{[>Tm />5"RMbUHys*M֔7tejAy #VfpתiDF;,z%#p^{x6POQ,yJi.nIgPn i4ol/R-jUnZ.MD!c-|KdQeMM`$5en p\·.k^ ĝY~FzOW,"ε*BI>z= cvox-''DÃ$Xˬ]Qv @ּP7Lf+?pсRk3 dAG\ky7ycU[ɿs/ŕHU}ԣZ[ noN 8z9q+']+>yYvZnϔt=埏"G?o ee8~O8{|-Eo)zK[R-Eo)zK[R-Eo)zK[R-Eo)zK[R-Eo)zK[RX>-a#iK`V-Q^jE $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@@I %M~o@)~J Xi(>E% (D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@snJ %Mio@A +M%Ч%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QH@%(D $J QQnǼՔG=m7o7Կ]{o}zry|X> `3#\FkK2x.} ¥7^L>,ap0sd,ZR{b+rI+@?Oqsדտ}ׯ<=idzsx}k4k/WAVRk]skO-  }S_^M㕯E+:l>TKkU4#!;Дm;0Pmͺf^?!Q;^ *jŇ|vwsaTn]F15c/iRU~8(6jkȭlxfIBGB2dWex3)q#[iҷ0SիWhO[ х@u@~J0vd6΁Pnyi7q˱R; X $r!-)QaJ[3:vgB *ZE52}BkM\΅/.}6tӚrU]Eu5Rz mF1NC&UmȻ%Ѕ(K6Dڴ#?3ŘbhѸcZi.D*v k 2VS d(UfO9ZҢ4t?wwTi2!.7_=;bX~.JD("/?{6 8`ha 8<3vf,觭,Dɉs~$,JLYm]l\1up#C$K"d>(?9K>zHL7`K ryɄNx~GNxI2N}# Bb68 XP[]`UL)B{Y K- wt,4J7?W$|}oRp1}3}^EWù 
=Ҵ?mRz3XBrfRCƔϲZY^Ȇy}t0xphxTbKwQ s(Vj_A݃wC _o"&hQZ,^ׁ"z?NNKÝ&ʎX'~>:raa`""})',W Bp9T?',2x>,=ILK]@^iN`ҡ[*v;ԗJLej)qM21wʯNbqқVmgEh0 *]xo&~VC`_(/v}M}$_{^*Lh:va Ow}cƽAߖd}$&DŽbY$-w5x D:dRXQpQNr[P x5˓Nzi6S U/ƴ\k/-xZ3*b ʜ݀L%hCҲt1:-aufгI^6&s$}}i؅OT0lP]͝K?a_+=vP G0攷lrd\o\; (}И+DΜ)I(w (w#_srMy4U Y=lfrJ7O"^ ͵g2F\130C(]SͮMDk{=BuoRN5zAqvjCh$Fobw vG/`TN~0|jI6!h82|a8*`dGlŘdpmuM6=%Ma&#UϣK]ObF,=e?wN?vחutُoJޜou~g޾Ip`NH'G CzK6>{s{nʷJ7Ccp#|;[+#*BW5CZCa]UՊE.&D&fa6!Tޅfm +DGnq5(ʗ*Di!f|P.@E)&SZ+atMk$|P'I!DR|/іhDFI@ ;I-[(Nѷ_S$=ua]ᶛytN|"RoFkMMHKP:YɬWNO .hLgj-괲[7)ki 򗎓Dv)<]5/(ʟ fcƽ M0!gcˎ ;fg,C睡}IIQ,i rb)[ck\:\m)f./յ@'>Zo"}t17M6=q GkC gӟqV'v=DgзAz2k9rSz5ӖQ+iпW;~[>~h=% |0)"7'8 ƹ+kvA(|>a;TA/U`tY'7f})'_mq@RRIPjOaFӔ&6K(RCaϻm3oj[o3&)adYLխ-A=bU]$80zNPiX'"@'}37lxF]E[q88i!6Niو-qΚ@ai(H9T>R^n^8W8j5`얘"x)53}9ñl:j=ǛSWF{z#A@use%X:9yV{Fü 7ʷ& rșu&xtH;tz |v'5c*-#D(Rk "D1 08gUDQ*QAS 1fEudr\'9GΖymVEۚ.1[Z7Cw0trYW[eNP*}&MVXV"RqjDzETn+:H%a =K?,z I'  [ _˃,g`ru(`PwぼGyGZ(8-{k5:XXI:pe,i"e(tk-(ZY_ /`!| pmP)eeq3w 'uq|tHiۇWU?r+ce͞*uoN&}+eSh/gJ\csb!PP'>TNgG/fŸ\f V,LogeYt#gip*{Y[ ?=pNiި4='Kݖ!D!e!x0k5"2b=6MVHKDkloTh ?A*! b <* s~]`XZʩRRZH6ь} m|[p&yehusI?]nYT)f Po%U9oP2zEiPG Vp).}Ԗx(S1g&eɶ"k/8tb2i 4nklӫ-n WRAnAt: *PQk %I/P!`V0/2ZDJGt`ZD"rRVNtVIXBg `*S:$ rl-!\0Gd1޵m_T$@x>pexݒdeٱltsC*nr@؋# c>Ƙ1!T[ցk.$B(u!EWc-'MHvH]4#*%jWuGE<};f?Df2ٰχr7Ȁ)!!3BM`;vyӿoҫKP4 'f'gb%bϥAvBڬFZ_ߛcׄzX8Ր"ClKUa>lw|YHc]E*߷KB*"?wK.c˦_&{E7Kݫ IՊ=:^G.NhyfyN9rfhuTo A[0'JE#*Mő29{j R)$`$] Q@:E+L ]h);G6(_d>d=X&1GRK(j՛G&K5ַ;P6~'7j91Y0vN}Ꮷw~S<$839vњr((Ne:T~q(]s.=h5eφZ,O%c5~;7'oz>hQ| ag!S|`ZC3]ͺav>Tu`?iR*9ҟ\Q.&|Hf\'#DLZFI{bs~9Q[zֽXٹjj޿ ~4riEmDh{׽/M?uY\[z9zo{}ŎNl'Wy\v-mPin==lU$=?i}>tQ=|={U\4y|l[^\7+zc$sj@KmOq{|H|u{!ҵ>x=Gp Lrn0MΡǧ5oz8Zwl|4v,6 5R991Td-x1J1%u&QB@'q㐶|X̼݃Mo }\\Qy.(4_'s"+nME0}D9 1 K86d1[aWL4@n8;Ե;k n"TucdA-%ajER *!s5J!˜ˣt}Ci`)W_Y#t5 .D V٫z[R3<2C͜=-9v~s"-s_}[r+y=R6O0I2o L3ɹ!fb<ߥo,à ZN%[$F.ַL9q60w0/,67*$9_>U+&]u,sv#Upo.ڦ:W],Cmɳאcף 9〺 1`Z#Vj'@#LN1U הZ!X=lžQR ẔXct0 . 
זp ~6OS}nK1QQL2Tl2OD Џl,a5&OR-v*)O kz!ip+lb2g"DO"7 -Dt(tř OQ/Bzjvu7D}_J$*hbCA߬%hRR0֖Q/7 LDvCAAѸo*[&:@i5hfneUrSokێ2O֊ęJ$b/v{VS7m; IuVk t}:]P!kjJaR~zL=p|BמZKe6crEYE ݴܟ4T ;OفX5:)7! rhE)X:0a͇J&oQ n`bJpkAʵjrr)b&} 1YcBD%,- ZRJ`#ta- G&MH;$.aň5.ªals-Nl^n7bp\o¥vk=w1n%I _(/F ۠yzAb@0E?d;dF(>62 yP9w9|MPz}"&L,D٘ȽPnڬ ;6)/dޛ8jׄz8Ր"ClKU"mǙ aOR;u,s17Wr =-*:Һ.2]W_Wo_e}rC|sbOĉ-x8YJM294*VLM"9SAq߂9Q"=(z{Q~uislX͜=_R )`+SХEZĠ` e Oxd*@MOf~0<,CÌ%B#bbm5\::#R[Jt(WSf@eqL;'a Bh@)ې!I=BXc386Q](Mqal._OXW:~im[-Ain/tzE_ އ^ w6¡ רuCsSBvxS[e!m)vsIfuh|4;rG%$)`Ćddk0T,Bemҋ=B@6e%X<>C>X*7_rA/'>i?eo>o}uN}~ӫ732{!ϳ__| }} ݿŸlk:[҉RRYuJq Tvc\BFVC<Չ.V?eM,k'>&tFn:|8}OoȖ4@&Ѐ`qQu"[![|$:#+mdI}5FlJjkwL[1u>Ԍl=J-ZBؕdG3g* 26Hξ Y{|-Ly=E?ϴp_~Iϩ$"NL]G(2 r`<ջs; xW휰Ko_fy{&;.Xғk,vͽyN͉8/6(z0ŗ}Q:4k&`d36!2i27WԧI9_F5c3W *eFt3V[`Ol/->'[[bg熫mvG׫]x+ȭ-nYƿ]^75C9fxm=h{q;:}l__Os١Cmx_WIDn?ǼϜ߫:87uK}7򯨐 S.$$5ԭM !{Rhw[7yJ! pV{v.:A6fS`QG3gm]C)%FIF'Ȅ̖mU>b:lD٦m; CςԬ`TU & oe=S|j.]C-/E1tɰZ/|F)WwmI_C&va _%iR!)UϐEi$jIcDkꮮDŽ$V1姃.NVys+Z~mIkM#8\DRt{ 3E $N.؜ؐuVK峖!x70:H)߄P.y5QrƸF2 )R &Jӭe\AٮƱʩ8ГCQbSdS壅o B4d2+6LJ"B%MR*(K1cEԣz,A!j+5.^McvО&T%_˱9d%)Ds pDSY`͏c*kpLCRRqLB)abpL~SSK-,/g ,Z 6,TJ(Z$ɷ|l'W[]Ϻ츣ב7HD0@%Jr<ܤDb&Ik^Qfń燴3F_meXnŹB=K!P!4 EH^2.5XȁaP15(lNJ2kϔL),$m@xU4ZT()!S!8 =P+Z ȳ ]@-,QyMr H"Ι9%}^qkWέ`#x- KQ1rHBdR*M"mUDfDjlIS I+-HJ{/ _4$AlBUÄX{fV,9?UVɼT?ɋXpB5y QN%99ǠSOd?dIj$ZYqzBr"- o}>=>)v3 tqFgc>n=^sy?륺4\bƛe`v0Z~Ӡ~-77}H]Mop-:w!m-@fX{3ƴ6}$F)=QS_޽)n*qNܬ?l>Nǿ9͏?ͯ~_>R?7z`L%D@>G;M淖y~U?~0؟4 4z4oޘr d2 0 JssYџmh^wװX(9ixrI {er:ᵐOr: {{/NE| QTYetzsz5n'Mv𝂬*~紽{/abXwU xJ]\~n:܅^{4SN1+u-zugbJW\Zޫɮ1RK{\9uB%l\Ek:+wHVne tqޭ^=ʷ2~>+ɖeʏ@Lp^lYX~Յy p9>svE1SSdZ:h-5P7kz;cJ4d^ɑ9œNY䬳%I6E_??}v (/EM)ok@/;]Xu漳C4 %ڣbj:kMևHє=^bORj{ajrc6:U6:hH!MRj;ў,Ю>Bg|%4Ƚʊ*c\Զrh˽'~L楓K%oгd+?x-tD4y-1\f֎xJϼCRmpE[ ֔zAb"&ygLAʁA#9!ZN8k$:\[ z w\:~٨dwT ;.áGcMLF8Gn4oUMJ ƃ0y$ $" US9L`s*%=% EKO>=:69U l-w~(A +H?ȝ "tJT@A!T nV<'-,.B) :WH @S&HUؐ1W( GcTR5+mD%C7WY\~4YZA\e)%+4WFD./s!?n<ͦr5:Y ~wQu0k3'߿[o<6M;Bو<‰H&1Z|{6Zf6Lrq,f:K+ɡZU3-zy掑 Q=콉7X، A vTpY1# cfmh2";BĨC5_w?;ɍ@ssiM95C/sek֛|_^_]yVYg΢rÖ/?~ JP\Xdp*T^kfE4 nK 
q>Mpڧ+k&OaZ_+;_kmDw~.{߿gٖj}lhfԧT_Na>;TZ[ULZP218ɵ?N-~bD0/5)RZE PʈœJ( ?/spOg'ۯ1 ]ٖ ݋/t~&kiGW8?g?yChu?X[4O;buw/Phc"1M`,ruPHY"!(OY#Tb&P78.H)j {=`i"X H*,dJl>(ҶE ʰL[M%*E2>9.h-D<b\pJ'Sh_T)qWx>d;~Վ[{yywq ݇uM{i[VTş̧vKe AQ8s֢QjBHOXt:$NBd 6X[J\̗y2D)%OQg?YqQozHa}2qμ F_Xk% pg p'!(fJh6`d1t8fe⨃$QBÒz !uR0PB$^ȴYQ Bx:YtӱcFSBQ&I KY\p^CQRţ+ŭ:ֲiZimSӎZöT>o&Ѩ(tc=S`_#;uģ맓ݘGIs&0KٱdfiK28?9= 3EUdz۸fh^dxYbAB'Z4A͒SV#+xy93jXb8;zZm\ +*N`/IJlD9lDierT\yiI{x/+82-J1JsN8ի׽^\xGzn59*~#aӛeYNF+_LjPvU88 ?&Oɟ".JՋ<5f=5+L3`ه؆awbEq\"|Zq\wջCE1-Slt-Z9L$!Kby=DZËiaDǭ+%Kd(ɏ+)KFah.(]ƣhLb,JH c VdJ'$mܩ:!L}~ܫum-]7]wtȍU)2u6stud>e=.u=ܺ=.fh6f!,"?nZ[=_5yrWZnŷ\yߩB5x6oy{Ù0In:LbS@m̗ۑ RunZt2it8h'N/w;,7Ҥ8YgYD*wFKb!PBJ62W+!'zv4? շgeQ] :c}G;. +i2ɈHct!&I)FR4<)qǼEEqzxt2|CE9:Aϫ|z;jMʺu?V$r" {804ϗI$C=? a̟$XpD%3^PKQrH..j xTF-M1} ""5Kb)-HJ{/ 9(gu */b(,K7/3[ Em?.~j`vz6XF51#S+r#!bW;19vBscԴ.ͳ]uSWb$~hxq\튓O?2;TH ֏t6 kf^vq0AxR'\Gg&HL}TBzD";~ÛWz|?Ӈo}LxWOqfbm${ C疡n{ma w_^a5O#-ȕ(_)E#_7]-]-bA@{Ѱq ;r˸9AAןZh>_TK9Ү\Io m rn]&VSf-o&WY GG#Qq#OjE Pæ^qGDNhN-ImXH|v {:_YHR&\.i1ܦ@ttxCB;KPk/o&hB_trsP7ηj$_>~i◤6\v4Xb?/eQRKTA+{|pEJ{D$e8]y-,klMf>̿n@3} WT@*þE.o %p 8 &N Ѣ. &Q3Y`jDzMĴw5nݜ$wP?awc?Q-KJ}FC}ln/0ߧ  MY5ٍ4`VdY7 qC 403CMP3PL %*G9K M ڋ:]Pw VC [lNQ.\ aR㋖/|7ǯQ#s 2{`|:`ް{jAȠcS KzzuqtQw:RVn;0BeL7gT&SR x/ᶩSl0,jOCwKyWz`7,L\Ih+ 5Jcqmn 8/?3Ls*EL5.іΰM M9*.8Z̀uwnU:хvk_j_-4<l>xz7EUÁ#IgiֻD ܜw6rx04IP=;Q=D^k">DF: i0ZoھEh{^o:T^V=iCoX 9vujD(5<~)tB 'A=WVDTu @D颶 @8e}#W^.D@*]{7ZHQ{k!2J˨<$-gЩi΂5֠ 1PKwQygL@ X댜qnz\K 冏:g=uhT)pLL09um ¡bA$|'5uo_gv Ѫ#6C v^*| )R*>t:'%|srh(%Z [e;ai\K<(l.cnܼ0'c(; ~P`D>R_Ll۰ũ={+?/)H8FqT&sJFR}6T2z!'qV~ e=:vͭ]j2N͍H'­Auw ؇/q*ٽɒ3j59g TLfu S^=tu=ѫo=nˉk_E#_6C!8xe:$-pgC5E,W=áFȡE vWWWUWU׃^8]V@õ'99J}z7+BߕgSfwK}h7Y7Y Uv'jP5^&V/r hoٽ}(y> 0<̹P 'EMkꣂNU\ʈN4[LXYoq2G&u&VM?< pJq+ oճ?SC˔J<%"¯0M&J.=jvu|75}{VvG>bz?v,'tT]Q[6̠o3DK Yu%Z_$F{W6՞}5ln [(B-ze1-l3! 
!>| HkIjжvyagҳ0ւSv8|#mvGbv@ \vZXSAP PVQR3cĜEZIhBD_c*W_/J.o![` ~ |!mRctv~ l 哙d>ٽ,m@r!սܛ7ѧaCw昬"^^bY`kfQVjzq\B(`^d(v}AҵHT6SVn z*Dw' 1=I{7'% saWPCI }Rg䕙E@E>)\7yg8.g_b?hkWFRZؠ2(@fSҁ[Di:8 eٿ }NFAIMxp$ ww˳ WZxv{SJue?~VY9/ȭ&6z9YpR 3 ;լrK&Y 1 Pǣ&so=HcI'o:{IVz~aNo*4ü7L.'WΟFٴ?(M`Nq=\)p56:{Zz!Ka` ~n z60U,|Ľf~u޷}gB2j[+lTȵjILLӣCWwyzt(.|8P-tg.RNV+DHj,"w2P),(mǹq̚'ABT b&Fᥥ\hͨpq(s;j$bOZ3Ȩ!DHD"# F# A*8EFӖ5p䳂~ جTZC) 6,rsUy8Xث'V?SrqB<枈H=Ďd_ rFl{ᝧ;Ld@Ιc-T#1Dev>r!:U5UMI = &D"Ar)iJaAxAA`wqTĎk0o/x1Xh93)q@5APB5ZȂ!ŀ"&:ǰН/E6<FOZ#]|U 6 #pQ<`#K/HsH-qlIأ=X(c] wF!ri;ȷMBᕼxeU4uR_G_6DظGS T+ 2ntu5SɢgR}.~(}>JUeTK]6S!#A)dr1E2bM8o9ީWd(_vڙZwf ݸmbn}z; jE~Ɋplr'wvg[pXHƧwRPAZT9sK.P<"G]| |=2u K_>OFp ^ Pȉ#{]Čs\"BlXsfc ejρJh>`:.i3yHwD@4:Hd 3âҌ;$/`p SYr[>$Fɵqỳoa-,|eCr )#>]|krRI]WU Ri,٥z![ú3#"_z(b%C2')C 9Q=ES3Id :?VÀP& -@sH\>\x뻋 :|(( w 82>y$?N?BxضgտRtzu}Z(h̑eNu[SV_qN$݉qp|J*]vgMt,\uO^Ϧ `$fr̕Ae[oq[seٴh}@ 1f3%滋!h8MX^0 0NppS!zr;g?]8GYE6ڴV!AJ.fB 41hгIF1Kbp6*+TM)Ǚ__9 ^_^]<uqׯÉ'5׽po꿝0k^XX}8]]џTH릊(H(/B3"Wr2[?|&HB Bb[E϶yE[Xrü]W\|!ByWt?^ȑf/z7_耝6Gw5ƍ$BQ+(F[J\%C$hIjrEq򾥘'0:#K{*rPI(DަFkMMHKӘK#3 ۂ Au3t:\UNBk4$r'")WU]LT ̨O ̰~2>z Z`RЮ7XoPPVp rlahs~[X̵Q*g er+gI1i7 p|1`te@#$Y\e0ۖe#L u5oP]&ګ$+[b4Պ+]A+g 9ϨH^*$r {#;2J[F kլ 8$+Cz Y@SE%K0Ue5pS$AюoMT>xQ˕׏g&EUlh}vp쌹G2ĒL;cb2J9*x(/u\q c\1 sH[G‰uXT%%a&Oa,=8񢑰vɹhM٪#q_f=2z*8ʝ7\ 1qV{tHbaJ/}e\4^d,o!5\Bې2 )Op&P{YvۀBwM[_͛ӰA`YSOq!5[,LwmtfUL5\˟4i3=)AU̫ƗZn؟[)kp N%b1EXp@qi::7*sֵWkA,zzEp~X|RhӐ7˟o\":9}+ﬕ Oѓ㹒e&B!e!x0k52b=6MVV)v%7 ܪ=*'TmWe:*>ؾgclaFFKC :/2adz0$5IXSD|0ar{+%RИ٢qq:W_F';fA6 8N7EbC]շO49-bO Rê b4H{`+H#T[qB8u:PJ"8 mvq1w&Z"RS %qb'8>jK|#32x5sp׶dl ͒E-U7!A,mPC׫aq!9wzr3qP/_`04O_kBdOP fh/#>t$LEt!"\zq7;fA񤍖N|MzDY1Np4H`&$gz#q_)i3'-D5$ $y`kӎqU;I_l/e㺴H7*J")"qrֻI+e(.1$NȯX/H׹#yY=dG$  ˺OrI0ÀA!hUh{.Ka[z}Z :h5VB)wSZ!sVAk ΕRݞr RO%MtNS̳v ޿wq>׸nh#KfGh+-ro}rw'lkrZ†]+ N6beٿ~9L?kUʺՠi22o$ u<"j #D 8Yh<ކ:Ģ MУTqbzV쵪$^9[o 1KZc Ι sUqE .ȷ6bq(F15!U,4\yQVT&)hʦk\929^ǐk'/Vԣ%w= /{'޷B7[U* ȢEĄx>;)ơblo@w-mB;D_ֳ}$^6N5~]9Fo(,Z-K^:9P`Q㵽A6'(xB2ޤ1FW5@0x;`d]w(t[E2⼺S؝MHְ]o?#1ܑc`y5KƦV z{`ey```2NxzThڭyb&SYʼnZpbt},K6( S5D;`SJUY }v 9$0 Uq5a L 
Cn4r2x~_0=F/B&^-+|uO ;v |L[lPW|HR޵H4},Fp!S?k!Clmnd o/y2}Wc6⏏ë9deyL{+#Vdzy>\w=Mx~7sG,CN[ncy9ˡ<.\5, 7lc/IϹ}Vi%Щcbj^O஼~-d0S~n557QmcƀcRlɨҺÎ*~;k?N&w-|W~PO<_X ˦4Qućc7͓Žͺn5ivtlPPR4xՖ&yMle0Nד)E$LNӓyJ\ [(z}|(P:!!zD[?ׯi}+"="vg=gZAj٩zEGE'}v;KZ.y5ZҝZvRZ%t2|68G:ej.YNQ1Z2Y,ْXl{izVaz~J!a(Ll¢T|]gכգ78* -jrFξj |>h%Q&ł*(*èlL@Bs:Qɮ!&Ž"k;1hcv^#hlx{钲? ؊*f^Hų&M ҒB ' W&?.Ʀ|CRP/]r,@c\C!kɫ\#KV,7r.DO ~Ql30$u'l5+!j"&QL7RZMI[EąRD1ຌ푌5,r-]ڛ77:k.Hn"=fOBE0͞Sþ7{*J4{F4!+!>u5'`z5j jTvuՕzp$\qNn&^,YyvkrnI.f(VbÌ؃E +]ɟ/?l14j\G" ñ q`[vh_~6w&+ЪBſ;2ԻҢ^MȘL=%-4|"nwl񫗰'%6ui c\~4,g`5!Ch?eYv }ƾ]/ՂLAEc0)?D~LLh+5vn ?`o̎᧑ zӨu;2FJa/acuc[U{j|F.CQWZrwuՕ׬0hz'<=={duҧ'+W VX}ɺZ"s( 'nʀ@OR*YY?cҚT1@1+;I%KnK0iQ~0` yш6߶ߝK);}-o q%cQbW G[TN9k#IR35{KC%eQP|zRg z]$7"Zӡ,QerU|Mk%?Oh ѳU)8)PK9FUI.I Vy8 rNbPr*% (1j@{((nTyjgX0rь)S]QZ$>W.űZl4r6\}lΨbTz_iUjMSCet*'DŽ\u9%EdMj?.cŗKEu*P\dDۜ2%m\Q)9{[tthm8!f+YML4;~Rptr4QFD8P!$*%*XMAA)d]q8ejc}7y[ׅ‡l~#oĪUq xc:1y"2F"3* &  @ sLL)2GmКFOEFc Y.lVW_=U TJ9N8p V&]P1x\cWA:FJY:[ ~5d0r1-2!'(Ȕ_?*GJ@;<\x`f;`J07yp2U {'mn!m$(z]M9$pB E5S5FT-7X#11T1r/0)Ԭwk&'9|0*LYT5*)S cbqhr r4&1:~qB]r2AM:0Xc)yv̳Dj ÌPu*:򡜔\X7 h4+9BHNsfhѠkv՝t6;akcx\([\QXl,&] F+bŒ'\+:+A&wAY)eY:Vlզ$EǬt2?i6~.Z*E5NB*{?&\7{Ew.ߚ[RW|N2YƳڨ`hy/r'q^D@ᯓ[d}M~3~YR"BY_%m~ ,r2' .88,H3-EmX-Kv9j6+APD1uf: vxs<8C]6ڿJ`! 
H xIVK`d.~j40%1_}?c/ зjk).MeDk{~kꫬ RI5+hd6&?:HxW'}Fg#L~)n=SS)FZMZy_x{5;o.[f+ɹ0(guRۥIb4r56g7aau$[b|uK]͐flfy|a4`ǣMElpo}ȮVUH^&ZՑ0ĥ"|q9F_)k*0&YW6շ 7<0G?}ӷ?>Dw߿Ypis|{~Jwt4aW@V)Rrۑi0~uj9",Lyͳ8·?a]T[ZpM]"q_vUB>E3 6+TGhow`( _E܎tj*U]s?G%9layS !22ӁRm)qFdD"4 tAS6Ӟ;9QHM'55` rs q$6rf_9-$/sPHx9m\DlRz:NHX85ħ7]7GuzR*#\yo"t..ƓhC˔۽ ?a~vj[i䘭Tx1^T Q{JaZYo3O^{*utɉ U\H.}+h,![My8ekw^w[^poASȱ [~({"ك@F->.`47Ei"q=q"?s|=Z:Vr)C!2h  *TβRpo+)2-ꎎd~';Ӡ I,Ejܰ|E[k\ J]L?jfMfYoihtȳR͝|K*Ljd<)VԔ!0+CʘwvGy_7c7.mp`Wm .gmS_iդmg^Yt+b'HԸқkb:oݓr2nTll X$7$6RVKf}jT6.bo v mt Ik#p0ҍRYo,h<_Y(!n0@%.-2@IL#E={Xzrö5\(wp)BcF@1gz4‚S~ӭ9!LYɻXs 㢸koRtsu46ꘁJٍ[vܘ=v#an(퓞`PB#h8-v<6KgPn[Fx A'4ڦBIɆ#~(RdKJ ,m3Ej>}/i V@Oi!!<2S tur 8M(rnSԭ#"( -J))$2Qg1ةL9 )AH'i4:MyZFIؗ@N6O?8>AGu7G{3|+pp1l%$h}0&a;q)y;v{;)[@E%@$K68ƕU87Տ^IG%xP-Å06$½U}]pu'ϻU`*Xs.>֗_\zQW2mjmrL]MFw?撫IOTlޡ^]8ޓFcq3E.Oz-^`eL`3}wݙ;w`AGB3}wFI;wgLߝ3}w~POa2}wu;wgLߝ3}}4gL3}wݙ;wgLߝ3}wΑc#K c;G;wgLߝ3}w?I@*wgԙWwgLٝ)3ewΔݙ;SvgLN)3e!Sv62k E϶0ӏgL?3akD~:I <^M\@k)MTvOH!"<Y/c: m03Ľ\QbHM)uexO@{ XƠȳu7q>ɚr8B RL&Ur0h8h)(b < g)&44BBٚ.23#Vq$\@KeV8eZ3"\B6x@hL$$UXF:,)W(7#uty1z$Xsqyi) ę utl\J!oC/S~ m5AybOBbNvvhtMhྖr.`X9\kmF0_( Cv2HN;iFK<_Ւe[-KrKj=DEvUXՠ"&(t+NBakͬU]*񗇜R21z|E!1ZwW rRl6eA 1.lmJ;@&'0P. kp; $Rp]ؐ F)x:;EپHo}YYn댮Fs0>2kq,YZFQwfi{(K-n0bQoH]@pBR/:c9ƖSk0{}];\JKb4E`:(SN@kDX V[0"5B$.}b"였fgBL l"h8* Y@o6Ec<}*:zԮ`j]n# %grhaiY̕]%p_Ԙ}&'^mT123ʕIsAȢf Cr  -`6F$zj6y vb';eb{oHJ5Q; þyq݀D-z0^G&P$_S4ӕim#ѢFe3yE/@Ai:aL4n69c;y4[ragILFUQڄњc 3ۆ6D.~q)_+ ]^? ?>e.[֫tf?իYv- l9]UOw^ B+7tujݜgWx|A}0Ys֜Mtlxm^OA+Kϭs44,! 2l(cK *LBRx|VYU dͣI0 YhaVǨENhM^ 䛽R+@䤎GCaLR8Rq mӷ"gʅCt%7r%~Bklϸ ٢˖OA~)ntCr#MU!cęq%;%1e !%ȕtԐq6&Z59Ip'g%\1/$r\HcKA0+9:%NCfKq& ȑj[EX D2i, u1*wi>[#gM9+ QG?n 9N0>bA'I w _zЉFA{OcdbR'MKJQ DPS"> FPj$(͇n"Ť (&$!egj5fjɠ:?i6av *R%FG yparKqޣdhDq-u-WpW=>k~^gZe7!9! 
@(g&Jw Zf!'&J2 m7ױȩmXd[ v@4dDҁd1E D`e):pcޢx4'sc{hCְtA}6)&ǏX9d%(QjLI0ң1eXjLeji{L ]WXc =t75U?JK_6/g 8]heB %H-wrѷ|lr>YwT:Fa\InC"HZ9i2 q5Oe6LxՂ|ަ1jF5>, nŷ#,@O$z'!9fɝeSjEYŃckP5Q"#{%xddJ <x`!irHEA,0Hiq㴸u<8v[ZºVk2A-D~-,QyMr H"Ι9%} ߾Z I#x6Z@.YFF"Mh5$R7Zd RÂQ B7#RH҂$%q/̓1 b<*(G5Lk*Y9?Fgyx<ȩ9gƧ!l-:Ü~}>OWsb0 0ʡ,8ha>89T?A,8G'(BvrEN);K7z2<STjI䍣He0~cjYd(!G~̫wpu1Ɋ1R?9FV:dݨus |qg2>]8F)=?=N%=߿@v ?o/3D| $YhV3/+^zX}?ʧo7_~iYAyzmT9ild2 0 Rw9={nAmjApf 5, 4,*6xz\ Lfܿ?נ/CB4bs=Ro] P_~Ϯz?zNy+NS;/D#q#OjE*¦^qGDNhNs-C9I6̴K o γ4}]s᤭1oPGsI}%u&4ݬaݡN#:O 2wWǾoWu閇IDs4=. gt;? /倥8RA!EK P(:n,NwQ^~i-<+REGzk)f_fP\O.Bo󯙀 ri_"*'xHڅhTh,h5T"=&b6Y=x^IHs(Q͵˶(g3ե]h~_MfVB>yiŴ5K{Kqy!|vf QTYetrJYm &o<i={~ʷczIJOR(#Wb['$sdv gIh\X:سPeS K{5)6R6ig'ݶ ejnZϭrmrBۮ+6FWtcW,b$Tng:cm%b̪I*SRdjhkCwmz[RS6~q`Y#sƼӬYg"N:k[Z秫7 ]XQawPँ+v ]Pܜw6r`h8D{v({ Dho}4Ҿ3s`rKL 67[/f86^՝ͦV~)Tǩ]aR`#B6 q4ƙ\)8SEۑ3FvƯX Q3B%%G2rvuީWМ#RWHɕXUUr өצ$ G.^B~T\T Ig o#L3{M_~׻MM&z((~GeZA18Dp$v3y5OD7S|Ē{c; ne8Sdi$\ o4noχdz 6 jެ?wEs}UL98 1m6ƟWW?c*lQ [_~ JP\Xdp ^kfE4 x= p(c#:pTQ+alͨ]6НRh}L 6{72LRVXmVWJf:u CB-ǯ~U<;//lrς(2BZP. 
G@¾-&% 8A3SY5;($n\LZ 2r DށJ1Acwɳు9E)ߣht5_|p] O ae~7fboPǜ߽}*_{X|K ^emlpKOQJRK9pθBպ  @8eQP",fQɖAe={fACF i l%K eT Vzj3h4W.\qFkJ=꟠ 1PKwQy*F8Q #U!2k8W1r=xZd *9,73b͐&g^ϗ ijGlSϞLUhzL #BjGn ΌR6oU6gCƃ0y$D-<1 <0gXI :1Rg-cm٘@1@(A=>XI54rg2+ P)1Z6GU/r,F~/B]t8PE&.qOL?}݇u~l~>k3Wo)>nnͧ?l{,PXXlq/'9WCfy÷,pWa*Oxղ-Zxkw SVd$s۸\x]`(.Ӛ`+u ;uω~}E_|o/eN2H4OFX·t6@,\T>$D/.e 빱1$^I@e$gSG' 離tjQK_SZ9|tvߒ;Pk&ؙ>2}AӸsiʝ`hX٭ A9$B8o #m*\$Q$oO3vT숵m]#֐w(ZĝES(/#MI*M9[Z0FIJɁ"`ٻ6[W:#}sU8l/ٜK\n4`3)HyzHb, EWI8nL%wnd|uCǣ|.[WGM]_yH^Ӱ6IR)S%:RCfޥc,̞j)c*y(m *w*Xr)5 B҅#ȔRc }"T*,]Y߀_y]k/Vv_[n۳Ʀ vhYɲ Jh{b+5j: 1;LZtv"g]YJnw`H ztveGgמ5Eή\SU$\XT4Qb1{%RZ9JUUq ǡ[8mh7$̺>~iz1Z`- [/"ޓEZ&yyQP{9s[ld"jиQ٪W_[tB/yM22wl%DT h[*R ZS`m%'+>h}_tca֗.+LbƜ1G%j dˉs WZc @͚aSAdTE{ NHn?U5`(TġS"gһ>mm_-BO:Hd*Y¢os0wt[B#mmf+^Ӥ%=V:: 4Y`(&Q^W9 LR} \+?e6z#F*bj!rKZ3Fv͸EՅ~`]F]W}{kb7~.#_ ]>==t2ۆ=AeK14i}@C &TLl $H0ġOp"g.gzø<=w\*P] 9᣺l~.QF-c)j|1ȁՉ}h&pR F O^KQ&[-!dl(iKI{",1[WQ[mqsy<"mg{`% DXW y{MX*ĬPtRR KTیtW/"YvV~,~UOװ7PuX^ =鍍V툴l[3u륜i_`X `]T>v@vŚyL(:LuHoN,uH>wh}.bUe*bDD*J %B h@ * Ƕ:[B VUf@4I5.ƻl2+7E;pqĻxqvXv3t?F 遥i>M{dqFͽsv_̇?3FU ]ʘ)3ShU)\ ES0Kо:q "KvH+|@l7**,| 2o}'Î^.褠bþvZ)Rm8˜z(UUUUqQBXŦ Y0_5KZm%2Q5aO1H!2~%cJ=z*:Џٰr2r~w\~y; m!Ύ Bno}oww7/9m tR6f?ɝPiy|}e 0;7C~ս!<ԙ[|"_X|kw㩛a5Wu{MWmy~8Xj/_6{Bԃ=\n"**.> }~RtF E;OL:>ݕHg2AdRk]&h*dKYkl]*1V;lllRձȱhҥj uYc]ͩTjA}3Zˁn/!1<_syy3XV\4^ W}!S~3L ` @̈WW]p6W{Ak)]tO|-:O<ƪ2 ՘:uZݕ]~KgadC*4VŨFY΢"xgBFݝn(rPbrN {@HԪLJ_(OqC1E1et6'v-TXAJbtF;  lq#ۇ}~?6֡ ݁{r}A=m ^Na!WjC1,T<;~z 7z596C78?0R Nc)2XUtɎPo҂hZH˭4J[ (}gcŤ;V1iV:Z{4<XLz8hCꬭε`*8P{鳇pB=Ew!66x4JʨbJ *x_\4,}^ 8u(} @VGRzRɘ`֡8(} C>Cɷo2a|_ .:5ؚ6ů:6ĭ'-5dT[R*d2g2+!2J` h8QNdP [HTm~Q)Bf5|J`=FpFYЊ []g5>)-L:B+9ah5BRB v#Ge&{l0r(g\ YϨg61\>3KXecYvdK_"kQ9cvx6x!EӷKE=Evuu [+嚜Cs8REReZ8?Nz0t2H(#R"V1S&6>gRTj5/Ր6k(,Hx=*Anjx0xde`zoG٦1rH\ ֠p yk-9@d`ҠLfD>="Bu١`,X ߪCkNja#R(UE!e`%XWGNi!T#W"8ʳ.;@]LvѱBc`4fxWǾ~] &/j=@jɥ f\Yhcc4#{CuaS0ZV:ՑCr5@ɚrQޥl_0"k YsQjN[1d`%[1!'g_+/[7&x2!8!j1CJ'MYuMDa䟁Eى zGSZdhnޅAd=٦A-~!FFŊLu#p౎?+;RٙZ䕫(0 F8X@8a%Q ,H} W[Qf0:!:}9Jjm͚ة$ 
}\v>k.Cj4^gyu7g;Y+`OdO>铫u\&xղO4_i7v31m/E+Qօ*xs!|_:!SJDH[e;1xg3XN??iVG(pc*:RT2 %*Ymo黷h:ok\?rze~t'yHGm9^^˽$q2?)W}x M\ЯO.^èl›[ϴhq{s1ʯ+?~_~&0!\1>t7DGӳϗ>9ϠԺ?uӘӤvmYd*]lzyг9'' g3j󬫇\7gAլ{J+i;a]`rFGg|Ɖ>GgL}wPl2]\>{a?:zcoyM|_d<#=>L?&xo7G?_ôe2[&[Z/S;⨮D((62ru P "Z"b7DzG^3_7}=/~ M;]?bOv S* ]O{W"h?K9k,R|L1آ\uE4ImXiL {:_A禲lkhRN)Z5Q1 vAH(ڏ]mo9+Bt—"Y 0 pΗ&(Wq߯jٲ7˔%% MlV)\LYoT9jc=dk PooC\/&~I^ KJwgwt3G` !*րT*:ht3xp)?AD$u_?]54|(խwL'޶jbR@g:c`V> z ;ҽu}bzna$N3چݷť?Pb}wxZ\;x0~ypկ\oLfo@eX[zv@qp;މz5 gI7Ϳ~ʠUŘ".K1WX&Ĺ"͕F<=b_*jv)Hk\);4W sUy1檈{9`Vs7WEJh_YξL;ZIC/5d{I`z޲/ɗ ʔF~(( |Y)rNSH]{E4J"7k.ƍ-Zy zCZ{7vegcG*n(7yAn7DHH/L4LW=`o?~az8}G57 /вwF>ݖ)/L3]5x)fHkչiR,9fʹ۷dj|8Z}0s'zO Q ]pO r=݇d62c >3Z޼1C/r6eCZ~ťf1T P?hߞ6(NIdZR4JbC 9+WWWGR@],A7ܿQx ,_Fz fH)}ƺ&r rƱ5yUNPrh5Cn 0xm>& Wh:9Kw/4 }K:mCԇyV)Sz` ]Cj \lk*[=v--;o+a0Js2"H hP'tb yr)js%o"灁hM;OQ3usp}jA] }[R)VMn6i9Wc>dؕ❓cO->3HhXbQ P=Ix^z6Ɓu+r5Am%=D^T!:eF yLHHU]Jq!rAZ"Ju \g@"b{Cц\5qv~=Bga_ӡa+|D ^'(*g>,c#GףK'f&n0VA˚A-|p.7>r"dà#g]:Q JQ[$;x 4oR`K3 DJ&ʗu0"ģ%:8NUuN=ZImX]ڹj]Cw4ClckPcM($T.UVkHN1rjIW;{d9TdUplA`N$B**kp5qvkZ t2 .VxƉwm)Zr_{',5p[?y޵/,dT۫=: 4{h毆%"qYd.s.{gkㅮhw_tOx9%Yt4S9'oѹ0#Ǡ3̹ql&ӌ $R` f皔"ېTFI+^~ɉCOzcTP7a}8U4}UeʝyM˖KwOeԩ6Nͨ$x> 7с4&-Y7V3Wf[Sgr=)aUhbJ$ p` *FN;[F&,)ehrٓ)G, ӓT&fEt)bL( \m2nG)'FơPWGO}x~]V-7d[l7)7W;.h}/ 4n4| goblg¦@0ʂ !KSF%dA6]`{!E1RkFh2Y6 !e %n'\ծ6:ڦ6l S6.xT4"Ɓ !3 Z$A#AYEMƹkTO/zIױ^EΘ6@PR 6(> $472a6 0б 7k155JqǍ.r[skvsށ"~wJt`r@Nykqל8eJzxH޿|8uu WG#)0ŀa}4u}hz,`:ʕɶN3Ii}K6~^mn_+HdHN6^ h΄ߒU/xHW*<pAj 9 s'h}Fp09t`Ue dC`Rf m4F e@I.3µ+H1Hpj&%vy>8f=<rO|Oj͸w7jӓŸ_{}&>}UcAY\2) 3),q^!A g4 %fҀQ@長vt?ߵMo!s~]o{4-'őO鋯AMkdR`tPyrm goI rxPJ/ESwg;*+ď~{oRɆJ.#v[>ʻ=j^)}<-ngo>c~~Գ|G#xIVZ]]Cse݄rS/Rߗy|8SϢwXs}R+U/dro|5f9T p.pn ”QtٹlE*qު t^RWhfgY{6+]0,rll[M ,60F\Qv쟿gHJmQ%ʢcmyyƺDc #ȘhGPKX $PH踵Ȟ:DcRt^ߏ~&m+9W8zw܁P&X ˧al)<@rFϵ,wJÃ&l(USR2xiZxM_fm1{xWy~YQګTR}~?\|2_|KgVRzh|n, vLmv>ڵS&sv,/+ S`""HY/rI ɢkfFrDI(>wn:C]D[JVi`M y'bLQpOg' 袴6]*DcP( N1NeNrGt$RM=7(* %B# b)["Y:Rj89h*;YklHg%]}ȬD*D]9E.; XثDS+c*W!NI‡ؓ@$pͨ7Ӈx]wr09Z){1ւIIe8x Q\^jMrao'v= F-+ P  &D"Ar`)iJ n< 
PAPP2=hqazuٕ~ǭĖ@oX׀7Nq-e1l-z0%&JF+Y0İ0CDeخ&!OQE]!Pd[-߬0& #pQ<<`#K`9Jc8y̕A5 ٖ:HA^Hՙas8C>=(RH(>@&8xF=Q`鉚%DM2 JN]/Se#r">gEG[3xx+DΜK&Q˵^2r ~AĽkdt8*5>"=JZS6dS4:ՑyN?4@>ʿ#{]Čs\"|Bl:Xsfc ejl[z?bMQ ;9./Po%P0 *:mqkt< KH02 Ye<:?KF>||f` '_MviJ&R(_۫٧z /TM5Nm4 1D71 \ #LnըlOٝgGgt"KWo՝W7`$fr̅s0jo'Q6~o; W dJ/_iҼ֍,"G ZF?]LR|<6]sp1x8իiզg|"ʇ9*B6Yb'g*?= V F .?:ٻ7o߿K??>ׇߟ~ uߜ}|$_` M Am'._z"c)'ghK 4"$B| ;I-[(N޷So$=a]:%8vhG!")֚Ԫ)X\ɬWNo &hhrөtp;IWh(zw> iAo˥֦P鲢)B?ǿ._#jR rD5<+7Gxu-9p-0*gzi5~Rޥ.8NAR  ?fi ԃ5SǛi`.j=n|=Z}KQk!.9M@4r+gY.`+){j:>u!-ӭe@DKc5oSB5]uK1WBPbahdqj4Ս#̭R93 &!:n-6$TUz^e5/޲Իd_ycR`k)9r[p j/6$Qs-9˝R 3ȱoQ`]zt:,'n-xP!T_)b+t@Hm_jRz\Tkn VUwb*xΝB9+7ېPseSW`>Yv۰%ޗMz7}NWJT37pcmOg_:#ԂrVej-DiRHZz Gφn\bӺl훭hp`5wH懲~ϳg tIR+߂O|zU:ҡ| GWGUQ/p)>^E؊\xf'? iq5 K-Bvrџ@%dvi ܉꿥9ztZG?Ԗ+\(CB2u{6eZk-7bZ?Mo"w 5}RWePMVM~Zl;[`fw#/|gz5=7|*jbe^c*6`cv3AM|'YmmqdҺ[|I)Jeq| >Er8`rU$!8ʈ+>=e}s>E!O1P޵B%8@F{W 0w/NvXoB_%)RdkS=CR!))0% kzUuuU4@s>j5ITIQ4h+NMR4 %PL AZp4RSF%28O@YaĠx*S,F6Mj|LD[{͙t%?*@)jSqx% uU¢#:!]ʕcoB|ۉo%(!IDdV@tm"MRN&{A2-/[TO;h->(& rCz !1͢0j'C9EJQfzLxCh_ )oYjYBCGTh2cm/W(' (;f9zy8ʝEFOPBJ62בk!':-&qIp'ghv.| ))!k9z%NC<.Kq ȑ Q$rϨ!HgI"҇1j@9bl)g\ټFͭN%yd;а ipO/ (|d,PJ,CD~-?S[CSN2D|]UK)ӑxoc`$F ڜ*SL`]@q0!I:U j:?>*RvĔr*y6.Y"m`QZn) =JPSEQ+;U.cnx3L{F['R-DrRjy䚨8cY!h)RAM L,AnƱ;EOEȮG Qwmk%R_Qf6١ft:!DJA;q#aT~6&ރg]k.!Fڜy[fz9;f FS\Oa/.~nDhʹno18|-iI$ꑮچa:2C q8`lG|'c5H1~CJ\:[g\FoϷ(8'7j{o**67=߿Dv|~xwNwoC'iQ $,sl迿e>{aon>}7 'M\WAUy 8 40 RwTrOz~v iBp[ #JlW-tGno5(P`BdGAm1JNIckqxI!qhq#OĂ&ӈit:'饣 srniOǝ蜅T'm#Pя%)PWӡ YúE":G[']v\GV.0KKReEGD@RXHxT?ƉJ`W:V]=UxG^Q]G,NQ^A&"+Z΢#3}쳃K }z1Aȴn<:p28E, L!3Y)<k1O"=z_O rQ-ˮ3'B?ziW޽+/O1m ̒.䦓~Jd흀<i{סo'vIxwE Yi6t%ߝdN0;<`>; m V'/,kw|2QCZf'MD=ɤ}y,dmK,ӳVWTjwZܣxv(n:u]Cu3:3MJ-3A.3 3Ľqë )zܿj9p$:SdZ&h~o^sVʤaYln˞[14gZiSbl~=BG; l}>K8=(v0GM5̒d7'_C~6;W|/yg)^w#X y\mj(ϣ<@̳8E)S#ey5`-g]{@O z^5՟g !< ۈˡ[%Kko3ɋi<~k0vu9TuW57?6_lqF;ӛp(3{MM練PU)'=f z<3W}ƫx{jW_4ϒGǃE5&B IQ)#T%Q6U.(7'ym{Ȏ^utiUVoZ;ʠ!;+^E@9lRfDy:=–uhVhцU((p][yp(7qt28 KYDŸNDNU; |]+-VZHIt|w=귩,Z~}S'uz[o,i}d-e>?DʛWmH":rc#k3B@*&4E5BHOZ4 UOAꐌc) cP85c1r6k( g Ua](;]ӛl~ל 
[o;Yvo.`08O_NIJ1=ZLK#1*7.&@\"mH!_m"Y402ٙff6LQ$%'9baٱ,nҸ Ď)HV t ;nk 2T'% M06 U.A+ ]FgJEDK]tv&㴦\ ;ʵm\\`o 6:`1dŦ@qEk/hBhc} fP"ke^FeEѰ<N<l}׾J2ve{ww1G# fH a,ښ5tؚU,  FZ#+ [ijM=vDռWˢZ,nkyvޑijr$(iLk =iGW$ТJ-0iFiq@L`UL!wvUgWϐ]YAxP=vUPUpUrWϮ ")I+I<vUUt(슪wvU\3ٳîjLSĮ` .ࡰ Cǽ?P9+J\k۠͛ۦ*7‡'kW |@5 Bh80OQC_gټi,)ҝH}F!շO> GDE0}uyPI|A,L/Ͼ}q;=dTD}EAd9c:BIMDTř rr_=J*&,,9,2hhCHTbwDnM`Y[vܧ{]QYC얮]w x䍠M'CUS_nG m.ͭnοnos<ͱay5e-Qۻw{~E/z^ky3>X[Ŝwrb]"E[:^w(qWf]r˦ u66Z~<}KH F !nZH]m(-Pu۞rU3z)hY[-JF&i/o0'2f:c;7#½go1{j7ԙeEwhy8{?8N3ǥ3ۚ]Go"|{ߴua<F |IQ!^s}"[v?^)­JhWthtv`:a+O~ykIҦ_toa+|!a |cPNuakN>w^nx&dAPeɨ(C\R3Ph“GKxbSBH2UI9钵[ԠMsG3p;10O< {䡀IX>m0911u2cŜdY;k>{+BYl)5 P(3fI FhmV;o'ĻvEٴ>ףz-D=3Hڪ\b/7ȷ~DT7u41J扮H`AZR:k} D?AwY jheXfȪeL (,B41ih!FYcI'veE9&D 2@f])eUF'Yv _]BuvQ}N||=|kBz(*hă3$V)bPbgJȰ>&uҦTmkt"BOQnRY5(c> k9' %RLqU?j%X3ҙŃU9Kšz Uffk`X*%}*%>3c: 鬥?d"Mʯ%>QjJ@k$LT$q,HEfbZ$uR'.Iӟz>1Eޤ@("F%t-Ek|]r!KEӋZZq6^OzĀx8{yG 3TE,%PP>=xȔF1ͳ{ry+> N\r9Dޔzŏ'N3 %]HYKچB5ـRT0{dB#Jz[f,77z/ܕdķ!Y\qP9(I(*i7^JgłN=ytG+alUj3 +uC=G#я6{&MdU᧷itHM>Lk:>Qի9x.i0FhȩXmѐ."BP:AP+nw3#z<G3N#X>ytQJX&|8Aacyg$^>峽П;ް-cY(CT%%b5BT#T>V:^!*0w^MdZ&RTdey8e1bIButdk~,gBtfۣ9AVP9h|h>o#e̔A EYT"իţ51$3ߝV) [\Eb-HWXs, .!aqZ[kdhzBꘐbduD2,iQKg_\p OE_>;[?sgD|PW>oN?_󟕽5ջǼǟ[&5v'5qdO8N~ȃ=_xJl3(ͭ@ "Ym_Ӛ{ $"P SIHPG+E !4 /G3GyŻ_2"{d*9,a/ KZpAT+p^a|߽mE9)͛4)Z?/S2Em.?-kE}ţrJq<MNxe_OVV]Yω||䳷bT[-9-Zͫ?ؾ|~`v 0V]>ڂX5#`<|>k3μd[-(E!6d7>nFmfa֑5mI˟x*S|2}Y.b)yrJcePG ^+ !=H#9  k5+ rX&z^oְSW SM6eR[gq;|b]gڶqU'=nvI=H1]h"vO"쒾5]ftW=T̘ OL|@œ9{l5']2z!bʼn{&)Mj%o-+Ŗ9i»bjl<͆-%;m];ƺ;]#%H͆v@y7^lq؟6J4ȍry8&Xdb 2,փR`bnL_fM` )è&+1U4+q"[T,SiCuA+\ RACʠUS=~R'? ƾקΘϗ^`t> vt0tт4Tw -wrM~WX-=$ރ!ϖ7/DʠmڑZr"&<ER#hM7iI׀^x2Ȣ#9PVt`79"[3ry@z'2nǩ[QȾW6߾f^Zg;V8xGi1kvh";4 >/s/#ɮ$h2*&P^ $C]SVckkf:#@HIq֢/IG!t)@ i.ٹu{9!cYUSֹ]^G|ѷS.j3M'>F͋>)^dMncXwTj+&BPd'|?? 5dݽ3j+?olw='X;缴bm.y]hxbNuٙN!] Quh%/Z//j/dh9-')jUjacȘER$R:raO4~icڄ},1Fݾ0I$􈘊Y'ti{ٌٯZ舲_fki/GC+^^9q)[zy|5nYBm(]$A$Rу\pDםVl_}vhZ%KPj[h xc!;,5&B[qo-pLγl;Ɗy˭Vf]<]/z 򽽫1_\Eo`!<*u "CՁXJCRx )*6~D[RZk! 
[binary data: gzip-compressed log archive `var/home/core/zuul-output/logs/kubelet.log.gz` — compressed payload not representable as text; decompress with `gunzip` to read the kubelet log]
6hnڲ]X_NqQJד:҈!DRA4fo iDm!HIJ:҈GHn\r*նUn<\%)L-+5pJ$-WIJc+ACh* UWm+ WIJ;zp%xO[WI\j OZwpJQ6YWI7\%q%Jn>\%)•fXm$[WI\ɷo:\RvգthK˜lO5{MuRF10#,B9E4o(e?=N2BT&»2ܧFŷgk FZƨst0,-{PuׇQư_ݗ: u8\^nl\0s7= B\ :HUʌ)P^OXied*zU7Zusqp%r#+5ziljY3EtX{nUEvz)ޱLύu)zCO5rlg4%mV5!^]l e{DﻇVSw.B.Ohw5nҠri{z =ZW_TJnžz-(y' npbf>IV3NOR=uuFp%W"(l+Q90-z\`BpLf\C5MwsǕ qeabPW6qZWG+ gp$ap%rmW֙JTeqX[5zbPԒ;D W#]A7,\kG0w\ʍz\@`)E.Qp%jOdJ]\+JYW"woE=D%WG߸g1^g\:pU ==.j W)r0盉Z7D%/'pz0mvz#lhGOx-ʺ]nZq[Ln:p&xyگ}[mvJgk̈́\k\ԡ1JZ7 DsǕ z*H >&gMSkgD%#)7& d ø+ *Wrn7\p"rʩ W"(eJ\!F1HJ|Ir J;w\Jk\!@`V< DJJT҂c@<3Κf j-A,#^Łp%ibP䎃+Qg.*C\pu :nm8sn̙gglPa(vf3%gqw7ְ;|fl^鵜j?Y&lv92m82QܟeBB1o7I;_:M-蛺i*΃3gOzmnF\\Af\\2JԺ0w\ʹepo1|n\_Kx{ۻ뛳E~חoэRCNBu7x}GxũVcR2>x߮^~(Y?|.!o 6ŖX2뷗ޝh~x#a?:o+w'5Oa2|q &EKpnx pZsw/q= vyozz&P]˼hxvsuq'A/(7 gIOYsc0T <Eu`Z:7wLJ 5v;K6Y71zrsKxaͺaƪšQvvIh7_4ʴaQ㎉W)CW)}ORk7!۪i*̾sT)Oz͚WW"(lJTqe}i+ +k(_{JTnܘur44`0G/:B\d@`04L1(j;ҫe(p7'dww@yB;T9?曓\o9w){?]Iw凇O^իWA!w A4ǀ?GId(_Ml;3Kz#?uIw)__Q޼}|֬ԕ>Iۻ=9xպiXC3rq/oYM*w?ot'>Sє~|OMǏxz xCԞr֜Fj`|">b#^5-,gG@QżH~Qy?{?[y 7ρי)0>a?Wh&#_==-mYN( (HmUۏ^7L [q<ǖ q璬5{ӞmMNWϑUl.uI%<^IϑE|@Mҝ @]߬.^;IFjsn *䬩IF.E(C49eܬ1.ԩXRU+ )jJ[X5O婟Ӽ3tZ}:PJuU*beg%ۻdRWY:Ղ6'Bj)VSmh-58U͵`0&jڠ5m:Me[բkhij 뫫aRΕ; TMw Nʵ@5M)cĜDhY{mX&35#bz4*q%|cϐfdnimc6J\u@=zAn,h2ѡL2:玡af.Msih*gSC/D7[`V:qx/F\Ttv2~NE,ZڋQAɘ3O9 H BjݹUYUtP-w򦦒Zn}RՐJJ\:'t!KO]G[tjJ9o|0 MmR9)V\dcWDFUK!P`ֺk3.m|&FUiX'k^,ˤjH 7R4Pc 37% b搃G)Vr:`=) ;*D{j%x!Ԭ;vaiR@x&XheG>HO2rSNc{5pQo)sXPmw Ee^ÀD)Qn!VϾJWA[,<[]:8քF_6k˺WBfCsduc]^џ|m E@Pө׬ozdZ{JNpѦ  vvdZZElJ=R>vJw* JiRTLd  ʄW{Ƒ>VFE2R}S:n3o%ezVXΨl&!l-@ݯ-V8!خh܁YWBnU{Cŭ0np(SP֭1PS0@ED&\ =+FK7uk&x[مN0t`a.jJPI!01UD][y8fdcFx)VZ!hrAN)8J8ӼN0 !gI;"JPw/uZ~ BRFAugoEQvu@[>K=#Y̼8G%L#XH/eJ>XԠV8(٧[,k $u{PH&n! 
έ>#hv#:}p aPndrAc/fĥ*ʊYdbL@T؂*b0F!&D̿9axn;ۻveUzs!7jGWЭ3?8˺ z` o9ZI|txK0upv@.¥7#J&r+hJUaİ=GNź*%tF\xO(Z V<@E&rZ%d^S0P>K L1/dx/:YpH#X܎m`CC.N5~d}¦E*SL d-PBةbM1to//Ý?>wj t }g pHCe2aQ=+NF1@sJm,2AύLݯpRCuQT/A'E5byPĤO"t4.еKHa\U8/pԀ o:k /5s%*\ ZC %|?Xvae%3:Ƌ  ׅX~ASqp1>Dp䔹N >VgW -o x3vvD{xu(%:ER޿VOlRGpqӔ;܇ mvoe89.K͠W| d,ԗ(@6Ϡ¥߼]Yʓl\MtQү5NM +[-.O-=6ݙKƷT- \rury mVxnf k1pՆ)߼jb(S[xtxz㕊O8 qd\i1Cq4~(4CcOzK k ]!\[+D QzGtut%7DWعf :Z+D QjEtut%%u\o-B)ҕRZ z(B\BWVTJ+kh+U>Ϣ+D QzS+% `T3tpE3sWV^]!JˉN,Ă۽`QWW4T4wut4]!nf#ult(=S+2!£x;``3D72 JMDO;(lX1gg3c^+}dmoEVH(wً9;,e2j xmY- ?]0̺߯>ɁZ6GY[_Y' t>kZCI(eUf'c޶: ;]:_lϢ3oey w9Ck[WJtT&tB;#ЕR]!`+U+th%C+D ҕH]!`N0p-kfъR1+cS-2x&w3 J5tB] ]A7F4DWتv62 \[+DQzOtutt%u=kgͨ+DO>]!ʝ6]]y+Su@a|,6}f8@z+;cM;svיVhZfNӈR?MGV;Z>)H/J/s(Oj?i/V#UG>d?Hc[w{ڧV=\ [ ]\+u+th:]!JNWR  ]!\Z+DOWrhG]}0+o.nt(YЕΩ]!cN Ux.]!`+k+D;(v,X3혳 {(Z:̋ʘYg%?7Gsy>W˞ͼ_h=jvc@\, ff\~l?HI;yΞ^=p}efjhhu.3_ otC-|9/'eم)DqeѩL_lƮM2Mo^͛7P{ q$l 9 3] QthuoShЙ[@o}yX޴)j ﻤO/oe{P1jy}ЧjGKA4~IaqUq$gw4|,_)ŤmD2HQ*-3B L e8IkKbI R9y s]8Ur@}aY\鲻Vgmy6c5R>̪V}UTUR: x不/U k,U|f'Gu.{,4Yh{⦞+hjf E iȹ^}jl}@c}ccz>z[mboauM,=n -rqHbo%%lI!rr]{kj!\9$N3n*SxG.'\tHx?o^l'<0B^~Gf,Lba4O`QxSNQV|+< xvb(P4g[PD芩$1J.V%P{9H7-1tuYL f~K?Y 7A6]Ø9s rWcwi;'T(-sP?p;|OPJ Fn-nGobӥv2a3^N:oBbaEEd2bV'̮ӗ{Xt| c]RXXXUHAf)gʋPZ"R*i3ty}fJ*DTfctI tUJb.lTT`S#%8Y8W\+ăB"A"@ę/ Ogc7wM W51kK`Ƽ""NVN`.uՉ?WSr>ș7* ]=8ykt1xS<{@ x cuIZ B'&܃=N'Nj|Uw_Wp/θqg\3ηZ_[^٤} ȎܾM{o$/Cw0`c>\cmy-H~~ ͺwG{ŋFת'L4l.W+u)v.(^Rwq27BR_Hu!N=fbX Fu>U訌F,TUE%+{^*hB3ՃN[J(*sq_J˘Sу.\T˚y1xuqZ .\R CVU l qU~IqЎZ^0o"7OEW'ߚr "q  UVf2Q ,'.3{^;)ﰱB^^|&FQ%-;icc^Ęw79+ZH,`d,włΌxHG4ۀlén#'oƗ\Iuow/OmrqoOOKن>W޿h6¨dQBi(UHJX4%wwBygJ%AsR롄 Ge,lei R֞8=c\/lB>/\ITmv[3"o+]vy9_AƟ.ocSb`}JI(G*y X҂75a.u.1*% BvgW#E'$*"9lJ;<L;ڮv2؝͂HZg*okq]L^vwTq\! 
sc]H W4J`495D&ˉ#(b.& ~&L1 ozDlqGܹ,el Y@e9RxAY8;6 ƶzDR뀱1$RL1!%+&Lb NBA=b3q{O:_v :fZr_4B4~_ܥ$#:[LUx RQЬϬՑ,$(HQQiOO6ӎclW}?|btFw6U,6}|c"螪q7쑃Ǟ=I`S՛}^I.FٔF=/_1zOUq'M/є.~L).HnƲQtah5O_PNƷ-8{Q  $dչxIR QstN%Rfpl>-x'y|ž{*dv%y!lyNBR9!DCԝ%J#(੶l' 'GN(Ė-Jgȣ,1&r@Ni\Xd ]ɜo)e*u9_Yb&%ɘI>I$ *mr1C#6@(AE@+q~,4}Y&dks~{nr.LL]΍|sI)?A=ˏvsTީS=*q_9$J ^A2'JB)bcrTQ| EbCw k#x%! #(QBJ&Gm%nu2}^s'jq?#4 X3Tv7W>Z`(YQQ-F6&mX d JfM2X[2qV \uZ9pdӱ]e$!9DˮЫ~.]$^1٨l-)$8B}9/_P-] uIλX4&8Ib IJG,*= $"e,} xEՔt!z׀NKʣWq:)dkRAdtT⢨7 !pTQ7 ؄Rݣ6d~/ake[upd02ӁREdn?}aXq41+TZ;]BTWS#OTŷ_]9%Ʒ1͎]-/ bd A#JČ^x2Ȣ#9PV`9A*h7QL*|1}x@ݶT67I6moZM>Ϳ i|{{it5?>ь~ht{?ǟ} |lTUn8g_W_©ʟZk+ZKٵ0zpg(RJ]E`e:5r 8 I#@ O+C̗KkW~ ӿ~|xq{ƍ_,=@-K3t|/ F?IO4'Ù^k3;7G˓"?Jx_oi_E-/Ztx>`߷6*Sp/(ܭvF|+Èv/ =YuoP7^>N~-6H;;Uț<$[Ij=>Uv'{MR%i`2Z`dd!PHL.$ k0 VA6 2W`95A"8U Cj}j3q1OOU8dotKab[hNM399"1ӻRjD E`V mQ/{7o]dodx|uib-z7SK48Fh/ރ7߼ⳌL ,`IA~@0+rSiNhs9  +/*1ss CEF}&уP({D_7Tm۳PCjp*,'s`S?/ug avO׋(|x)ڴRMkqzbO*5JWq[_[wֻy=$2~}ܲ:v;_Z0w{'3x<xA= 烧ޱe_6=tsl}rTG(zፐ:)<ۭP{(kHIKFjٍ =x_X/YOU򷮤<V?Z-&gx2[1Y?g&bH N:&a*ZXW^ϳl{#U,=B= G%cWrMr#2ZI#Y3& Q1: J% h7ʧOm鬷h7矆bb(PFK"f'؅Gl*90*mAp%HJنD5x!Rsd/93!X%O 6[?>t1/%߲ұ5ۻՠUdA{Kf :| l:z;!.R)rr JrYnoƂ;;@W\ESNDY#ЅB)6MF əZUm(A;B"DFNeI CߤE𺱝5@;ozEݥ|TxoEۿ)"́%d#3$^Gǿ&" RƔQ*K-CL |V/ͭ^L&.v6M=֑H)P썔ނ$C-5 |.!jjՀDҗח]aJ8.HeCAv)Qxk%"t(tJl`88)no6RlbyG#6b̄gB~Ӕ~=ޟBRQ7P1.fRZCV+c PP I*@Z?e6an'`_39dc[2,xS0k~kjX Q@QS nd.EGcṂySfru֡Ii )\mMjo/ؔ>(˂eM;E^{7 Z*.]xUJ{wޭa8psY+Y~mT/D,! 
%ݡWsƁESDik&pr' r[z3rHYr})dItʓ)"$ԅUc}H򦧴"$ۺ-rWN,}*AA)'D1(ꬓ@PJ[J}]~xmI9K9KS+%($oKYmؿc$Z{_=tM,D}qc=гHKz,87 2挺Db sIX+RΦ<.s1*Gr኏BeNJP)HN]"!B9WUjCjlH,Qp &:+a e,)thD)6%/C>fAR1³x!/rBs"ł;y6u*ӛ_.FAZra9|֩\Y5鯣w:HbڹGRg~Q-у3ONDuX I-ٍ8{ˠHKtJ3_V I%67v~w _j]ϪR_VO)Od{< [E8}9_iQA+hxfrn$&-ljw>h]UMetGu%2Sng~\O6$?{ܶ J//-+9Td W1Ee+}(BIHD!A`曙og{̕?ˬo3 ?\O N@;!e5\^UY5kK) L0R~^U|4|;:՟>8Fõ,Jlc35(#Xy4 $rFqD{e~?Pf?;A| ^>=;̜99?kXq~F^UAnᏭmܫ%U{@:v>\{= 1|ϏSC܋Cb"RKo`Ơyug:4K6x&^ޣp[D we( #O9UWz.hT\Jp =(ˏtqhIq' 87s=Ll8v\H 8&WÞneQ>MmSM5fJE6^,g1&Mh4SN5O8d6NwhEKOe/=Lvt3d!^GHysL72giO䎨77YÑU`z9͏9w|dp^'䅼t>}z˔=}2`||W@Ë?&F/U=bn]fxwiVmFWz_埄!fٻ] J4>Ա4")1{G#|Fȟ7/I =?"?e3kQ&5ߺz\xc7 D|=;gk#\*GwCnԿy*ΑS/|?A.`Or./7M;̀pg;5,a#c?1 af@43s ,^&+ʼ4umً )1q>PAV<{䝕QI"lx(&UԻSD@utB z._9A[9zx^Vkdhfxԋ1}fC0od'")Cq>TFSvwj٤O捤7RMGr54;YzS_ߋ8FkߵPbxÇd/CC HC Qq͵Ɔq:7=eY/lT\FCg ykBI=ǹ5tr}FHgcie8O!5?ŸY'CFU5 (&;eyi Lݽ{=mH2QYͺUX;Uo7Z6cIk.=Nᚶ\zh%mLjR=X@o6T.cm+DU Q ҕdh"Etph ]!ZANWRvttdML.m+D+L QJҕfOWsf jt3 D0 m]!\BW7LQڎ,*֦5tpi1h M+p~ѕYχ}Qt\R}vzMDٝi;'Ǎic~|6`eJ%?&OO^:4{5̾.Av? 
1J ޭnFz=3w=҆Tܝc'j(Gyx007hz}b8J 6eϮR %;Avk':fT' LW9alt/5sEmr3t7lXҧwo_PKЅ&2Ƀͅ+uG?q^cW$Z9_ Y;8T1w;cXkZDWTPXk jB}Z=]&^GWCW 4:&Bֶlvhi:]!J:CBZDWضDJM+Di;j Tay K JB6]#]Indpy0 SiAt<[\OGާai}&,#X1uzLQx$2מӰ(2ݞI?힯 n0q؛^i_$*oga&V^*|x[K؛Hs()wo+~l~ 1q4DZ@:30Gi=G?켴*DgǤ ](88!(YH UW0G1+0_9&;%=4wĵfĶ.cLd+`.^>t{M=zZHppM3N8ūpXkӣf>q2'SgP\*# M6&71EVTFCLECB7]CtbRVf.a&VH JJ{UI u̵JP16c}Zg06D3 -z+R/]%=׶0`χ:gk"~̟z"u)˄]s޺}i:tAb\TJ {cK:VuUb?Iţ.>*EԴhj Oy'mpg.]֡J1gZVURKզ(ho•q+G1ƀRoVT6E`Au}R jxrD);G2[i[DWF BړD Zx QJҕU69`Kc "\cѲƧD@ ]]itjdRtƤDғJ5ץI-ivSLh N׻Jjoz1su=y<A?Cj_k-+>pleL+`(!]GN2zp<,Zt|-w#n0#\ɣgQt(WY9(fBWFpJO1 '*k]w)ݱu&]%[hf]:uS}K*Vagwe_ʪRҠz {#!,elP2g_p[Iu;ү/*sw*nu,g}םKaJQc{sQ;םV_}+Z3JzrGǒEӌ6 ZMҴPWcZr&ZDWXJ , jxQF"f~ ݵ:h-!twҕ06iW0yk '݊-k4]#]IƠj]!`՞sFI[ *$p=VuttM V={W)hWV=+m cm:DJoI^M+@ jXfl]!]!\Ӛvc9!wd@w6wt/teⅡoRHt) v(?fXJ%!$Ze`lvhb=^-oM?f]E0WUtKO%"7b4gW)6« IKuʧWI>VÖ2n+E)^d#"`:)gLh=_ B5[3CѝkkdNfzh2筅Ҫil u¦C}cZ [CWWt(5j.y kBF"t]#]" ae:coX)`0嫃]gu޶ރ~ssExwM W#`;h0&Gm!\o`!QϾy_g3qtonC74?D4c6;g0/oV3^ Rx ,fRFFQ)qȶ%XڅR;]bG޵5꿢:1/US:S{f_v36d;[K.LYr+EnA?k')7z3 +W3Vi~V`:ޤ3{p+ ,QR FOذNJoOLN2/L~*ONjy/MflM਽Yg'omz;Hx0N3#Q=+2֡Yerveֺ}5Lz]OY-A;A65%7y@[f3u3ۚ~Zd= Ǹzz@8_:ݹ;Fs^|8[z? 
8Fhc\$~0(phJh O>,*m?E_[fB ~ Ȃo OMJC4qḟVL.{_szWlö8[dS~3,F\ ~ @GډLʨFI/)ڡW$[Kk#X>ԻOzve[UMG&[K/p` QQb*&֖i?q%HH23Wet2kBm`{)7 t^/}b4{K6Ͻ!s<3 6WKIV FeH+\Y3iMHņ.z뿦AnuojpuWXH9m[l3f0O@')ZۺxbPۡ5i7A]_fwp ~vح}x_Y4Ya=o>l+|N,+Gso&иกP]Uu(FX+,4ѰLAGu5/V//f/Ǡ4$3L '\*&Ȣ^Ĕ /R*XV*e:C* yZ_aa2ޯj~=BgR0.x3!μ#iÝ6zc_ZWÉ^K]\xE@3D@O/ꋋ;* ϚAB>7Yv0h G/q&!./EO5зyN9bީhL,'(LsrIU lfXI$|](ʤ5HAP0KnCBze2NЙz+z%kz!pa|lnƣ/ۣr_܌ws"wVZьӫ35ﭡ^q3PjB1VI\"ғ9\f.II`$$P<'YgL䀌3.9%Cln4wZbH4e}kkjr7wXֳࣳOd -:?iz!<44w_Sg镵TSv":7K-$ƌUCGBQdP'Qt6@ZNPA:VtDa !,J8  ɠB16WE#je6XS.C>G,%g29qw; BWFyN}60t)eŏ_/B03!hF`dU&ʜ/.dq)3N0$ETJxЋI#֒T 0k"gq1Isl߼6*bkЦ7ᓵ/)k/*yt}Ӧqmɺh㇛Q*)%簒=-{|4Y6/UbJDmZ2#`G *%N^[+3L8FuUͨ)@<-6e6ZeRvA$ `m WKfCmXV qơPWօӅGՅKJߦ_l&iv 0iR$7A25vΆqA F2QAY.!t6ιd}ݭ+30c'dB66QCАpd̺,!8#fQ;i<];ڦ6NMR&A, qq錎4d3٤`qJMQ琕6>H"dȁ(ѐIȢ(lIGRV:`-_;J9akԏp<',llZqF54bwnK6 d$3FFu\koI_mmU,gL H6:'\LH!Z@6FI7bVY#V#gFB@zq̅|ո .];ktV-2IL\H-*\t Ŋ!I ,bEVC!}>< [&lnk0uFm؆Quя'~V;VbnUïwBcԎpM 58,-sG {EA io4.1qLHD"\Iڵk<ߌ`JVBI?ROS[ve?b+ceM(#Yh2(ZsPY4Π猘#3V9jo+C]"2 C7B%) $,ՎӪF wCNRGRd/ѢV*tMEE| 謓$L:%IKQyZQ8&Ҿm BnCZ +cC6ȹA" X % ,t^T,W-c4add VVQIb\EA f P`*M'cg$cc%GC~?O~mh8w8:w܁FsevuOC~| ǍW?7sUJZoS%CuM(,5(h+tb~(\}נ|3^aj&ZV;5{uXNё1(Lz~:ա(k7uiRŶCZAZY-;:N9N{9NPk̹'[;NOxL18a\l܍*؆He6^6A h΄oTl؆+WݾgGHQ`'A2mT<#U&X+` `փSuϵ1%0YQ ]2F e@I>3W)J=pkZA ǣ:D.x٤͢jeYMak-lRo;57<%XTGQКLT @+#4SZ'%"]v*;76IvE޴>[X}dO,Q *lbP7f1'rHA,c5q &1 =Y0QT$ &dLF 5o7G, oU y֝_qw;9=zC57M YO]~]pz3FDq>%j[|<0ÐK1h*)P)': {Թ#yИv-|tR]ޯ KڻT!Sg!`nLQ-LT0q|HQVi#{h<{ޤ;V~Pǩڻ*$7!@ IpƊ;T,RV2o@K#+}wE\XZW$J=|#СvGIksofzVg)5Nw'Uh"I'q3ROA'5>7vjHƔ,KF>kL}+]2'͇ȉ`T! dʊX H8I1A*HZ鄒1&G+T4h h4!ddA2#$ :vͶ\LkXSykK( {ʕR"g\is%GPZvC e ^ ͵g2:o:F#!8(VxY ,> snK&Vh/GG{kln:W2Hk1\,%_BlXsfc eu,)wD@4:Hd 3Ǖfi%SȀB' qyNwUz NQ*Vwbs*bz=pPp(fW4]r[PDxJWEäpC{Aݫy=d9ɨ-") }+S{4-m{VWrxoQN5+#EV`ȦSS=Ή$909\QinɃ̜Y G%LNpp,tҟnA'GQIu\.9H40y4qYQ=`uSF7t*'~:י돿:W߾I~zqwxuOo_Xip:AFpa_^fw^jX}7?] u׋/\o>mJ3|/X9i$$2o>NJ*"suDVY{Y{Q%nak|\wKA(|d|P^D}+7ZG$^#ݬH4Xr Z|oQ!=Dq-W'[Vͩӑ06{kJOz55#,ANc.AdF΀+' 4f4tZ)pq{΃w97䬳BvmG !}g n;^i9HT(g${^I2J$l+DGh}Odayn԰q. 
L.^4_-DYn+<5h?w+" k9D-#Ɯ 1JPH1^XboJZJ?@ >}9c866J<t"!fwVFKf1:A%L9>j暧$N!Dh$Q(;p+!ea 1BRTpv l0U #E$ *XaFS?-G> CmVsfo}_ ݗ휞սZ\^/l=DS=Co}nj1A$2dE(5]SPQ;[!q1vh2;ֳt< !J]BUdVЈyZ+0Hx|ԱUc9ǃzo0<Sn *0MJ6 ȝ^:<'!vTdp}IgE54цˀPԶf׷ؔ0Ȗ6x"dfU]3NZ;b RaKmTc-%7146~<:(ĔƖe?6~,j H$% -& Md)EW^I" jC!k^l 0eЎj/0`ڴ9L;u8~Y:D}f6g4C޿@s ֣~~񰆠Y^/2U?NFӑ 7UR^'fG &G{UY),]pӭg|iᅤ?Vl){s2k#vD^Ą[]DqËMdіDL`NX pd٧+C2Xd7vb&~ ~o2 4>#3k&EKo)f|u.+7  Id 1SLA~\&58:5Yҫ6 aJ߰0g_nZe\vQB.wh6,.lۛiIkMj-9q_@uK:6~;{LDeI% 5*x=`ŕq]Ȓh"Y\kqyDr‰T4(gpML%)<^2eh6BdBI֝xc>i麫+Gxv;Vg# :kMɧY30R8Wq Ak% t(ߩp1ˡcm _epyG8s9"/Fv1ҋ0-a o&>@HY}EUP'ϟTuRO>4}xk8MNo0 adx1ݒ&fһ\ߏ^ Zo&fPVUT~{2Y%XtIg}yeX*ɳ: opAL * ]߶:eO"Փ}nty|/g*Q{VXPeTra1žAh\gʂգ•\qXt'7<h<7p +ͅ% }F9TJ,nJ <ˊx Չj 5m EK]'<3tny$XLjAhvF η mQ>qr #rJy5QKZS7GQ1'_*a_^q Mdmp-j &uD9hkc|L0&oauٕ&ݳvv5p=۾migW3igzL𲐴*aW \ Jj Ԙu+TЃbW ` J޷jr*A):1bW-ƻήJDǮ#,0*גxVuͤHh /KW_/ʚ /nSkb=҃n % #`8uS`Ci$<G:o4S߯fYށ0f3f1YYOx>f7j~9n~ɦQUH^Q #-2> Gy]z:0Im..%ڟ_:.΀g/;D>}݋`q30 B= o"@lC׆_JI1zY f6Wt~(;tUM"q[POUloYߢb "+7ZgIڀϋ49k"Tc91>M)F#17h4F[J\%C$8IjrEqZ|#1^lK'KÎ#DsN|"RoZS=R4Hfm rF܌&=Vtv 5vlضyeoZ^z鄵­c-߀L|1E.m:Li;y?ì6at2Ή5kdܘzf.ȚIwI3D 3:.l+i8.`xl$ȇcB_lV vYO#H *{sZ .UNF@E2aR/1xd!R5)#0-a$EV 1=Y3Yah2]*$v^mKEKw˗YNG("hwJ9V>:$vҤ-v8 e^#܋FH UעAf?p3tQwrڎzva;ْԆO\5茇cj|[$!k[h Yẍ́,5ṅvL\Qxx2'u)h`ZSu"gū'%`AȇhqI=tՓ;27N7u'￿|vxf4 ݰ?>RJ|6>>7a򟯳u5G>Tz_a'szg.&L}ܤTK'ظJD,[˔]t.Gc&E?e_Տx61RNa||KK_~D|3.˻8o9o83F{YC }.q<ηo=,\}Wsf|; ʵӋK5[z&۷3I*tFk i#z 2dN/Htd'Ż;Sͧk];7I6UQ~~7WʇEu6$<?l KgpW~K:"gjBpKD9Q%B1]|ew)eZWA(@؍N=waJMӾϙ"?L .^ӏ&ks[EN&t+s"o}+>qMHg&%[s:+x2HA1v>d?[WIκ]6M6> W}E# Ⴎ~6ɕܦz`-[4hIߡ@)0֗g7?k 3\N?7XJdݳ#w{Wz ♫W?MyZI{p JJٕ@AL%NM+/4Tү|8܎`Cot#W8o^>ێZيJ<< C˹'""" edNJlfWT+.3igT]/98c xM1T)b+t@HmojJhvb1Vٖ7Z#g-K irhncXtJ45U4u9 Ց|^wkP7c814n;:7 C{7 :cI#eLtu#Ew:xꯓ9,-{BC82$8F!`LeDQeTzWû7wSqR˷zfjzwtrVp^}*;Ce i(b)Fo#wXPEk,FZ0 *Nc!h29BkM̨3gv8žțacw cgyڢ%YZ̿od nvYҰb1*9Yٓ &Ig$}T9 g gUޖwu g\lh]є剔DgɼާeKԣ#Kc4~um]HΑ7x$M=>;zAGNtf#8IǂDU$sL3lVA %.Eef*cV2VjYcL{MVD= =$px0ʫ禟o'F?~[/^e%bbj 꿼0ݯt..XM"/㗫+1eW:4*w<K#]cGbrȬSfbX, <ЖWң2VIVVGoɃL&< 1W\E47u %\J 
anS9p"r.{gYR2fYu`.C~=|JadΉq.801tƄ2spmvQ{uڹ\G|[:9)A \`:fE,ErFSTe.62V3g?2*հg2ʆGy0-E;7?218sbѯ+F5 ^HX --Z3X!^*K AF0nC{.  6A+Zب3!36 !e9O'qCbyǡ+nNͦ(&/.o:#j[u& 3S,vI%1C$MX J$a"HEYFcv{Q#Zq02" w\60ƨC̖(v)Ki׌D\)g/H&*"ZSI'mV,F  tc(*#b5s#⧷'\w\g5/9kV0 .Ľ*1%a@L^ ,l Ɋ$| JS1pq(xX;ê{44<PZC~ܪj)YS͜lKAZ 'R*$m"(_$d͓5(EKpA:Q$Z&⾅D67<6t+PWfF2sR.pdv4&&Kz>;H%#[7eq;F7< -Ɔc4J2s # "Q7ɸ,5p<+b5I(S{t,ܩ1S7]Ro>2c2Mh:Ppބ ۄnE oSx-VJ ijv9ѬFM.$:;%vdbٰ=(cq>:oKi5TCrXݼ-sH>{{ɻd˪b8gKU?Um.q9kv rxP{ާn 7rvRR3p|Br3]`tUZ !7 %P>͋rț?і>d 6ptRxT0KO3>d֮ӬMEob@o!*tq7Y7דdwKbBzJBD'/Co.-߭Fpw=cŦo1n8垟/X=b"}l2_k?8Z 0sFyLj=3^刼q@~mlpy6Z~'qI2L:d8n$1 1NE0A#,QkR.3"7KL1 U3gal:~xpsѭ >gkp^L l)wϽh.}$gAZ\) 3ι4c$CAŤ%VQ@n[%xmHk.Z^M쳥?!{r<ׂ,6}s"}8mqŗ g5*(0L) C2*v%3٪VAS9őr@# *m4v"fF05Ɩ&92n[vC+1ޓ5snfz9 ?9_Ӵ4K}~w }rwf̠?!ѻOOs2s1Iロ Ȏ\5~xiw;g>xɗi/8Ƒa68j P</+R|)-sw,zYW>tqm|o]5 ˱NZ)ЍWU6NtA59l;ALrhDHmԐ,:BdMJ[f%\)`.*4`K YLLS)#=QDh-|NlNA!xxV廋W4~5`ugΥ,yWR cN2&-s!`:2Aggk'/˼-JJu- F{Gf&uΫ||'p K5*Qoj^>L4z@S28 \8 Z4C/R hԃ+2X y6pU}.pU0t*Vnv1jpM V(Ro,X W`m:< ,2jpUlbۄ+.1ᶙ|AsKWoGot+/D'pD_*5ѿ3o߿n4^-t7p?(_z PI#o]7>%cYWBYD?c=?k| d1qi4]~]s]bȠ_-ÙtK˦6twKmr,20̂ˏ?yPO;/<`<5xc,8ơ4 \sUVU҈W \ ޙ-^HFѿ]|?bN/M0>c̋ڶ4;܅dn1X곁bk :+7f~CKb^E7a?~o^ˋ^o@&+JZV*JkZV*\=>2LTx7J#[JkZV*JkZV*Jkk$JjTZ+JRiTZ+JRiTZ+JRiŃ#Ri"RiTZ+JRiTZ+Je||u-ۻ⠧Xf]`;tZv.!HL/~{L|$O<ɃJ$l9wB2`ӯD)mAć-/bD8AqD9,)x" ,萳@WGC,ǔaTRD) ^wV3g?\5K/ vx:bE+̀v?l.1o[vOD;)4!MYg%zeg10yR=sI)@ZsE@MO 7wLBZMGm98X9Vw)HD XE`AF'$ k@(Z ш"2aO٤/Pڰ73w5T۳I׶K*~.uXE0RDQ#t I=jQʝ`^dldy1z$Xstb) ę 4Y~!k:Gq&k+ u{ 2MWe8>)R&fy/ G)RwJBi(;*y_)Hg=1Se=rBz%R p7xX1҈{6PFRR"6~XZx~_r:+;[iXJ;GNV~ЛAoNJ0J:#J9< !w狣O,Xl9,VGJ<# /RY–ڨZJncil$`:\bJ'43v=e[RW_F˩O >!go#)yVPizv‡^h"<&dGW\ry,6C@WNK,6*2s"=)K9Rix2~>'}tv[Iڑ+ !X4G:#  r˵`ja|@\VFrfF,rH# beaHԔA bX`pHMH9[Ma%큝w&2?~rb)m?gzf{#52!?QHw2 ) 1Xd`M0Y;,VVWIo)f |7a^P+}Ӗwﶧs_cRy[(=Ey5B[*aPG3wjHƔՕ[9SDWp9,+U!! 
AY \h's99U]'I+PR2¤=h mPc 6`qfb;vC[l?ݧOJ'/7den s2K7}ҢMj?Ybhk@RJ -Kc4H T 5 .jInk[G<~< (4x 31 /-BkFSDQpIvT JB#X%D-D*`M[R  fu5 _KrNJ9 {a%J蘊80s1DD!v"Ц|QR[nx]wr09gW^s`R9l4!*c= љZZn6ORh0{ct?)u#e cTEҴph7nPP@2hEqa6Pm4׸P֕\WۻflR}4iZ+joYZbL`#,ZΥ aJpMjV>`a1`1,tl1"ͧE"|Ȯ‡[k|@8yTǼ#K 9Jc8wўx,a) qȮAZ<- B&0$R*J%O CRQz;f2Ha50cK (`{xSJs&'dBu]Jknɀܣ%4$ \{&Q)18cD囘:q G/ͨ6ɣ3Uy "Du촺r0[CL.93Fq889-ܮGx|V@Ͻ[L01F#1~aH0Z7,@h} 3f1|9l0j~8~ȦQUHВQTKjIs`gl6pgvZb6t*'(;ʯ37; ?~~~x~7_?uÊ/Yp ѱ~s߈q _~s-ցRz:4Kz?çRG"-@9Y 53Wu4!%[POUlwYbnb\w Qڀz>(6^"i is'G·#1O !2ѵRm)qFdDt``c'Es)s|6eM>i|"Ro^ZSN 1F2k#gۂr3t{:Ukk<񃤽c-|ő2ͪ.m~v`MW[o kez;IwbNAG\EbWninEA@(HTX{)&j9( QxB,7b(/ڵ; X;1sɬnw}a/vtw@2Z4.eZ͑ kJc;igG 4­h)c"`?FH9XTጠ-W>\5Ӏߦs;.[[;T j bHd8:Qk.o]UPQi;[!q)vx:;iT=4mB`@PaD&A'(!4b$#L&?to({߽{# |}8N&$I`wTDY9 ֤~/F ahZ4GC26MFQ۞n&"[PIFf?)v5KB,g|zt&2Gdp"+$Jj}B2!kq`/6>U|_ae/n5%-L8uV'~ }Q[^f=!pvտ;\_o=Ǘ+ NfZx6v+y\7?gf.UOO> TzWa;_FKzn <~ÕL7+Rl2;U:9>1ɑ}4|Ǻp6R6gRχ!?ӥ?(˗Dy>r_x:;dߋ>kO'@G3=JN1N[2s_ͤX6'*y =x\r/|rb˷/K3I AUh>F[ i#z 2dΪ7Htt7S"_ЅO34<][<ڙ0q 4{ f'E\o| g MWg6p,x7[v\ m^< IgQpf&(3_`Nϋ*h%S C5·30F*ٸfxn8@*kpאhM\(+PUgËpP}.Kp~2T:r;:u'ЖD%Q%Eh%[|c-|5ElSQh vこM'4%…X Тxf4yST?g?OͦAYL>-oe_-kB>5)zPo38wP*|6L -tzuBլ`[Kܷ4 ujWIfv+ʨ@EG8':: cfsmCR5w'贏t,-e xM8kQXkKg:]4PtF .y+xMb^^w_./Ft{]Ց3Sz44OromŻdie;k6r3Ojn{Aɨ3k3lfmos<>XaiKH:00S͡1^֯cr`^:>Nܴ\0w2t`B fQ؛mxػ ^;ӡ%=<}Cqnbo<[,R&xݶ1F 6HW(PVUT&I%Fͫ{tt(l5Zn![.g.Awx^28a{D cu0 hLvLZBD៑~1O7oH? 2P luɉlNX2c 6B;GI$SJm5hcҚ( H[XH2JS:KkofُR[s 90z#|epG(})D.F/9 l{{^wr*m^_߹1 AL6. SW: FPTciBc`((\$Ku y,8dA*_P8 k)Jz||}ޔp}~Y1CPZR"i"E:F[ߥ)&Is!E)HAcV%b *@5$oRK$ &T)x|T%X g?=MWe˱_~ѻ[8EO*wYYbsf'>"[t\3li_1c I6YHQDK{}q9l<"HCP[\tI ) b8_hr۰C|pew5d6g, 1O1U-},] }Eru-RtukR. p <#NsΨӼu8L&Eo ê(MEmI{/ki^:Zi2BdeĶ8Q (QzAS3&ٻ"u2EefyN}60tg{~9)L0A !L ʜL)R.+Lݶ"p^ o'25nd-SPI!$NiEsStW_QYԗV)ì]>YbfrrhbSs&M'WϗӋmdYAF2yէ ]']D{K Vym(E! 
I@t@T!h+ae`r,Ň,* M-EWr.^0-Jи\|T9PR 8ʶ?QBunm gelUfq-mlы/\]Wd%3fb*ճyZ:<<]bSbb}JI(G*XSY҂7ig m:Pid/d%tJj6uB}Q -H[u~|bn&Zmjjv'g YR\O:dp:]q9"UWdHƞaN6Rcb )afIc[I$Ha Fs1\hvl8aOWya<NJ>"`XIJ3RxAYxwI-zim@m $6sc2/J31DŽ@t0%fҊ28 Vml gEUGUsl&%EVvvq$dDgD* 噥:Y SSPFT"!`a6Cli}=< SG{Q_#cisNp㉼5H߾ª,l:Ӭ[vfnQ7n҅|.j-A'@8]AIPMQ~MyДӿCT3pACMOѓLj1x6S.Rj|3;irUFs)+)|68eRLT ɰ QM`*[Gj#u9_w EhѢ :OYVpu{?u7^v@ΐFej+\H>HQh@2 N!=n6&mx@$Y7A WOCx1f%+ȭIcW2w$/t+ 2$D0cXB _@} xEUt!zWkU/$ :vN:,,(m <0t3BEFZa1J21;#Q;߽ mmAvg;pwv6U.#ael%vE"=l6QcOAcKlJrZ1ESQ"stX\b)2OrںX?lmX[sB/ζR-,!XfQY/|>$>lEN>Fд'7Q&B@Fr=>ڈEʢM1#&\[7η*ȹnMՕkC /R֒% Y&r e9=91,)FZ:% @N!&T$/Dc=%arare^'[m_-1B{OL@э=;-grgmYdMx/[r0r\lL1vE@ygK*jYeHZx`I:cfˣ/d&RYʻry I4S Č^xbŔ"ſ} 3M!Lʯ_jВ66F߮ǹ7Fi5y=MFWܮgh<ɽ_߷Żdi^ԱGߦ}[oi8b u-^Ԟwэ:;R RЇVFW\0JYon`W_-0k_+?^ q:ݔï56\mq~忞6ZsdiXA ס+֋~dVhY^^bc8kT`zFLDT4ttmoXbV+8߻]yym>ѻť?kh_^hƟM `ӻ_F2P׳PhT2uD#g6^Φ7lgZW3OyUGwlw}:v|ȥSPrRn}{P֟z,s p {In8=3^GDKhHk㥼yjw3lQ1nw(,v@AwQY)ț<;ޗyEOr}򓵯~C%i`2Z`dd!PHL.$ K0 mhrGEd+1T^VƁʚB n4@9iUȐZ\l6lg*by#n[e7Ybc`K|4DsyƺAHƇ)^3+^%)B FxhJ|ٻAy(.y˧=䶤d` qfnЛoNoNÑ+/0,`I A_ Vt𜲖3Zf΢ӹ/CEF}&уP(AKmۊCiӥ F<ߞ y}Vl\ft\u<94}>ƥ y/~ww9tf7_ݳw-;nnwMW=6=i[fVwwO>cdQ6ss}wzGZ4Wj؈-*um&/wgn7?7{ 6&?܀"ɢ:%u"tܼDI [@ҏDݙt{v$ݝIwgAI+!a-KE4Xi h2m|D̍#} g4v(I,MI"y[dTc&$%([]Zi5$}iωX5 yM{n+W-P6^sT;N.0u,N*jW7;s}L=]Ä:>;kR769m#]5&$R|QgZH2d:q{B?-C.>$#QP1`IiNtwUX ).<"j,-9V#[>%VʃS"JrvAPHrFnQ k2T}yD ML $N>ڢnYŜ` j"Q!&b[iȻP.:{%}{Ȩ ף?+I&Ru.B,:u.sW E5mofϫқivYF Fo9iS>`tQݘŵw.ٴ+oՍ»l 1 } R[nc{8R * ;\~ 'ܓ{:k5wcmc7W'da%p(]|8_zw֛r4#:ɦ^Ui^ټsK\>B~t 0YlZ k?Poy5aA}S\@qtݏ矟ޞN޾tB9yw'>89^ $ C_[߽ҰzaTqA7uTfl_ڙ]7"ο;E=i@4mk@p`.hs.f~njp+bq>/H3>^X=)U|ujH[;4Ht?M$!N89 qFi4hKi6jNƆp)+}48y1RG]h`ԨF (MPۣ9˺3Vt0Kľ*"';/(+Xչ<ێ 9/?[pJH)iͼ(Ѫ /Q8iwq%2V\.4ɜbQE|R,jcFNs6pB #(d\T9[NօiMdqGowzNlyJ/ƈfNOd;h3E9㬃rȂ4q1V*[2(I4` 5D :ӎ&= N0ACe;1K8w\bSYǣ(by|4":.~˟b4W+uA~ѝff>8d&8;qCYYCs:IQyjQ~80r^EF՗zirE}^q}_eD`6@=*^ŌbEm"$7o;d S9dt@"Ea.|uv: ~\j|p-9uU~QvEoQi"o4_<,RW72{Sz=zl<5ӯ`Õ7_Yj-[x6q[QzϹ‗o'w7I)@Z@ܤIg+4ǂ8Jpe3.}K=~hsٯN`Qـ8.^a8 XMz«Wuͫ_74M8=3 G" elWF&8~a[j+$a|QQqcKu,ڲ*Q ,t/TAx!RЃ$![ǡף֛'zzn{ $)+0?[QK} 
hÏfw~NL'j~T;h0vPcQ[*uf"+jX\Yhx&gp_{7R+z"kG-$m"Po) ڟBuV"A$0 Ȉ+ >jKɂ!(D Uv:(BTN* p#=˘f ,E%U /jT9wYSV9JF_ziKw1223>-=_m wW[eSԜ3r*(b5!RcJ1hZyj d}ǹqh;>;č.i"7^O`KIET .;%YeeGR;Ù م3 KYf@kKDoR7}'5r6K񖅚$a1 &x* :HD(i4 w0ф@$cTby=dgz9 {CR4D*M5s q>";ؘRj jIӓ:=i%z$g%/Pb8HL9G(R<'G FC@vW\ElH) "y)PPw9mǜH@a.'Qv-"H[]p|-m|%Ӡ&+cD%C,g{ǑqyJ3!h.cax~/ ٖ;XG#t< "n>UEP $R%=o\ULl}s) D_"HĖ^z[&&y +'[0l2J%Ѹ))q{6*ZZ]~3DEp6)Q)FXBSlėͷs={hu^?e+M<yƎ0(&.o9H 'o W7M}=7| W}]wl^]N)3k٢9}y3`_|vh>\,+h^* mew=tPi\Ay 4*jC)Ijq$*3Wz%$b?@EuO^r*4CF5!c׳ԝ(iͱCGOf:::swY6%HZ ]<2޻(>*#)Xю =/K.۟賭hi9*'蠀0#!L$i,MeDnISz(=poͨXμAB,tr:+ʹIfQ7 \ J{#$ܩ<2@wIAԺ)Jx@(vC2rUZJ:28l%Ij͘ H2U mȹǣp<-U'j)*,HLjݺf*~U-dT^=:R({Q ܿJmI( H;qG#G]i|/GUM:F"J xȒ8*eoZ*P"Fko7&2Υ :$I鳽&AUI !#95Ƅļic޵,u#_av\6Ȅ"f3ݽ,xZSl_?U$Eɼ(VIeEdUH<^ 148kO03%Cpmyp%|{A~~<8HW/1}8?mRݨt>$bb4cIGbDV*:9:_@u.N~cnN D,U'q)h lJ#ԲF%hT}iyT#ZjmTxPYmlf>גrPͺnjҭզS)$d9 Yl응7I+{S jJ%i!chkᚤS#JrS)kKje԰,Dgb6`XwdLq*ZEfyҟ׫ Ki:S[LSa9%*:m?C1y'o{/ψYc:ݳaW; J)T[gݠMʁX(6Mc;[sWX{ZtTg! ;I%+c~e2]|pMmל ) Pmf/;~>\sD1OeD̈z`#n},!6R5TkY/la+~MܛEc/{-Ο Apwxl-־N%v;~a{UZhM[ZC{/J_|eB_L/J^lUZ2zժ_Ժ4[4]+<2Y+6^wz3FSHVzK8хk.qj^={C>*/Xq|7y8xބ{o䟇x&hw\^K V˗?~9BbN/|E'vRbX9Ud*V:S<ؘ_~W.,Z:|%8bsmcrЖZƋb .~x|B@&mm\[ _?/J%N:ʆ>=læW¾ZћGwxe1Ț7ސQ ?34{}a(Ű31^#g,9w#CG!g?kFA䮇!ݞP^ݥ< ppqpٿKyZq)Cn~KKy[S{DW]n0/t5zt5PR8wHWN[Gt5G_jUZfjtt|4"~ Հ̾@;o2w(;=2nBW@-:] NtG OXh}u(ItUKW,fF] t5PA]}tź3W[w8dL=nxPv&ŝ}r_8[U9ު1>|;O'˳[:%1ݪ2|]G??ml.}*ǗpeM k`Ņ6[kSm{sg7}iQx=Rw";,b^H>1g't<!<-j{9uWo3S3S}^9v)_9Hc5.ې#or[Ɇb'*W^h {|}_=o?_ x4v;im_å\Om5RӞIL~g&Þ,GgV']_krdySc.xoR5>ǎ%\]ލr`ZbjHۜ,pWg[vdk짅>ZIZ7a"EE8M-SmNqhiTwJa&j߉]>FO r݇SHXꝹ|Z#T]Ȗ}&YJ͘#$B#Sm \fh%D^Tc*TLKNkπpOD[3ErVlÍbv\]$g Hk=u/5k0sS`,YȄTHtԞvF]GD{jKm.5ێ03I/3ޅi9.")g%-ϰ`ꠢb | ơu CK.8Eej((%0r+q1ɠ-]- kBhcnuc(y ,&ZNe ְ )͍NjUf'*X%CE!CMhHpu BP|l d>RpP"7c>Ci:|W2Tf0Z} gs d ~-V(!Ȯb@l+!7CAwi+"01l a]̷@C;twŜ ݄ct ?HB8IIp f,O(ȹ#3m~.aV!7U-@Q"Ł.0͑ ,yGD),d~G@PS0Z` pp_j^Y*`k v2%3,FnMōgIu$dzh4vMfc'@ysˣmʌؓ긇I y bk"1C(y,LYƘ SA CJ@r%fdi9UGP=@z-`onf` ߁y!Kcka2Rԛ*Q^1b^X)¹a D`8{q*`vN^m6U-KGxl>&#@I|q^ "!2O]bɾ'h4pMQ~5 wW$Xw*;c70&`ઁ#* 
^m.nL[TAr|-ѩdpG#*D )gSyLp}䝨Bw X`Nw!f+\<ZzZfǨ}v$1ge?!W!xvax%x|) xDyf=ızc=v'XE}(\ʭ7xW=1C+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" Hp++ mPU3+Y3+D+WRq\JI-4 HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$"Ջ D%LvWWfWֱ $:F'ґW$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\2%xK+ \ WWfW ,̑tQ pMW$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\W$" HpE+\*Bx\`ůW?_bF/zb(n PTJ-  3a؋va6y ^PHcЇ} 7DWWr ]!Z9x9+$g=FW6]}f ut #˕)w6{:v]80)e矾}=N[!~wtEAoOUKu!F7K K{'6x5uK02Z[,p|?\­߼]!\/qr2dY:+nKӘGO뫧/%ԭIVf%㛷(>^&upZX7jee R+۰~+)Ʈ":%YժKΉGث[?X){v7>Q}7f Úksv߮8'Ɇ lzt(!:BLi+4cB OWr{%:´dv ; ]!\Z+@k:]!ʭ7+WҸ&+Ug5_Bc+Å-Zm&DWHẈ!Ff ~\g5ft(#:Br2+e;KW7 "Z+NW1ҕ̉5]%]Yow̎;Tr ʻ{(hun(7hًȭNJ~OjōNJ;|ȵcmB ̾zf z7@y;Tl`PvRtos B^BWr9tB]!] a BBW֛tL]!]A$(K͞KWX]!\m[+Dk QnyDWCWJ uCt5VڡԌJKeZBv{D:6xB]!]iW   "ZˇNW 3xtevM3tp7NWV ޻Bc+-]!`ǚ+N0ut(BW^IgLCt}Ci7Ȁ^ |( %]9&~xI-r 8krR{E|voVhqO,굝GGvhx!n͡vkZh'0 p;|}+uKW ]!\[+D QjGtut%44DWXf f :3tB]!]Ihk7:e7R]!])m VJ4CWWV њ{Wj#+(ʹJ9tBڏЪ`v+lAD)Ȼ:FFq+,d;;WV "Joن O "Z3xBb pC;A@{ÅwC)hg(quΓ; YWXo <]*Ù͜:4q4hQASϊwS6t"N,DTCW+{\y:[?޽{ 26O,PZ޲u\۹zO"ϻX2ƍ-(l6%鸛( LgyYՑe`DJ&zWE21\㫹Y^78@[:|얖Yr9FaX ~ﳳ 9R15Ge)!8,\T˚y1xUA~Y?dxA6]dl9a^\akI{g8_՗c7m~M9'7az; ~RZWf WעIIu2n< q׫tN6p/ppapyΪ1۷{TtEBH%Ŵ/<'(]9AY} & {)[UԢŃMF@$9KD$VH-SD̥jop7\5yvq|!>p xg'k.[+!r+Y[yYS ӏW}TY&; 69_D{\]{$ i&i]ןoe׏Z|R3-qׂm[*pŋ9sC 9Qe3{eFl͹F:$r\6WDժ-p#Z.f<Ό*qao/P(na iyWkӇ}_Ѕl'gZ-nDͣ%A`%JKu(S$+La{]蕱UA3``2F!Ҧ46Vl+0s RYg 'y+ kf홵-6ylF$e`ڂK @eD3#:bDb%,7Tf!NYɢPf} uĶEq+#z3VAl\Wc9 :%*L19UL1RHZ =3bopgg:/κZgoV/SzEOH5U;J9&#BEk Ip l"//{}ɇvMe>b1G>6ي90Y˫]VH*޽d["Z$8x%4LHM4մ*5.ș|_dhQ7Bt1^o683:,e],(l>¾a/?E@I0p6 L׏"G#ť5[|~,M8C 5_Y/P}2:~TSyF޸~|8iOI ǗH{IDžyzpy[V)ٷilt}pbVtbw5|uVR9~}B(1[UR%w1PR SQ+%8Y$K4ywxᷞsli;G.=/seKw{}kC}\ŕ7PÎw1Je„^#s" `^: {LPc\&O>n&iܜnA/h$3zΟ}c=`➉& 3t|eVݱ{ljOySlyjڮ^t5HcӅUn2K]uZe}H3oT,A,b#8ykt1xS<{@ x cuIZ B'&w|opwYq/G ҁK1B#AL֛E׀Q#EpkdoHU7JXI_(~,W´]|g7J Q Asuy1+=pf)Ʀ(R h 4x``V2#g~ا wr-<ꋇD浶X1!d A#JČ^xAɁ} RAeR!bG|W dݮ0;*ih#dv^i||GW!G!z9zwyGsv%Ҽ8g=G;ofAm勺Ymk尿w+^#@š_~倪R#6p΅ ׈Fu'ڿFyoW7iiI)Onӝ6.-ЬUhț<ǀ-Lђ=Kdfh0EE-0a2(H&FGW0 ߂nA6 
2Wb95A"DirҪ!pf`xy>,'xpv1_ht=0oF9xH g'P7X(39"{w5%X""TQV f( IvHٴy:lrG˨ +Eovۑ02xߓ#ߋ:0I&5`3bsSD x;{.uU U #YD_Tbc CEF}&уP({D䰤Z[xֽZCqRL|x m;^‡7_A<:"T {\0o*u*Y޽y~ngӸi ߹.Ssx-|w+OZ޺szAp3CAMo;^~&g?w\O}iԲ_=Ϛk>z?DFn~$n^ JU/or?-Sh,LdQ@dnXAEcI$oJ|zd}hk8d_!:Xfۗ\wu焩W_0j&:} bFz$jDknX]6O;>oˏ҅4QkD:-n Oa<ǤAϐ4﬛<wTk#ACMWdrP3A@2NmK^iirEHcVs) D|68eRL(L8@Feƅ;f_ùw槿8Mӧ-; $݉R ԑL4mjcWlɻc6Mhg1CрըLQ9CM5G8 HZFsb|r4ac҆ DR)58~kY~ĀL6ҫw cv^wކsAdHHZR"@˪+t ]$^0٨,-)$xG1IaɝoPY.^hM$qx@\HY P*YS5U0ŧyelH2,,(m <0t3BFZa1J21F]R~#;}v?V51#a4pyu'3U.!ae%6E"={yMP̀wa'&b鋦E8JԹ`3Rd2 u)~4mgh!!@x#oK@T _OE/g1Ĉ;] EBӯDD  @$u BHʥFv0C!X,3-c(z}d]XLG1i"e-y` 1BSQ , AV Gay aA6Gի¿tJ*BFb$BL*{GH ^$0zJ,ˬO5[iCJ./)v%{r+sؕ7prwb69`G.o@w]nyf40VX0wa%Ɏ⛋YSgg-@jf^B` Α&Z$R|ÊcS! HC6BЌzbOd|X8qHD odb"%g5joeB)22ՓOY0Voph;:S޿Oy&E ]mV;41ǣ+/_mr&!!pOYIXg5RA F}y]SoV'CHda6{˴ 3fxtإd:mβd񖑻z>*|9ڧuڧh''=uu񾳛3 my ֧XwA/nMLN)$z횫u%7.j;evlx2զ49/6N剜/c3.C:&ll:_^;^`uW/? ߼7?w?ěIG >f_tW?utso4mjz__eyTkU,K.$#w.&iDqOa@4by׻X-["Ô;zV~YnUb#=x) s~Awh^I>)G}#*1fw|*."ur er: M4e5cufN#GzncZ 0DN:TךM+kα {JHQqL#!b>&?#3-x5r3(7o#1ͤ7nêm8n; ϕ}GfP~m+ 0 hLkrɺΎ14'͌cd{.&W&v^IË}pFu[6m$yT"3yU.T%{Ijw.-nhR'Rv|@ zEP0LR$ri3԰u!]Z'kpRL̈0ڈ+K-yLUPrt E%"CRCɣ>/]=^7pef qx+QhZKg5TP)3*6^2]Zij]-mRwz/˼雛k8@C9EbCD_:_l,~p+3ΪB,,\5@ɝ: PU`\'CO7YLoWnq墉!ڱЭx~)\(/lM_duռ \zL6@z]}Xz,h~EoX'M=6NرzdFT)ЪuKߗB߯E9&"ٷ&۳ן7OU^C?,mSVuv5Ղd!Uh`-so0t\&z*+^%h4[\|d\%=d h4^4~oW^X&o|V!Ig5VUٺ [d3ƭzqrguck6g: ڮK|tͣrݭ|[xx۽'{Zo.)E@&t<<ؙr7u ?`S6[r~PqkNJjvJ 6WwE71w#튂jůw4]f,ߠڣ/)"[<͋_>\vQ]m^!WON fnK膚/(ꆞ;6[6Ow}hpkĒŌ*nnBw\]>ar#=NV /h3tbwZ7'ǘɈ35tWng_~6.(,X,V8hꭂ<ET3&K+]q9ʳ!Nj:k#x1YBYqf_Ό5PI13zht=3z)'/ojț'H~*-m+eRLyYJ.b={g M|ȁN:Pq-t1XQ*ih҂,+/3SAYWSQSZ >8*3^˼8x;WF )F },{~ H oSX QgO:_&y]>Gv:Cw̎ܡ;;dGt])Z 8J`.uZhJ䶂%=KR(ıU\MdaRThV{JO]'.ZVe(3mfQ*s+UY=]\ZFKʺ  )}sFۂ9uI/-*nJ>WdIageL'&ͭ"7moOo~^X~Z옟"Zܰxx] ~&=Cedx_۟tL>q۶3j[͟]O[tq@8q=r}LgBUlPv5'W'woW5^~?Շ n?xV=\xzZgVyJɘTk/e4x/}K^7mT0'_$]ī9) j3%NP7[[S^q=lVk=Ye2_)0yծm~үs^Z]{(dO)\>/ y~]\_]Ժ=6Չ+ED$+SgJ GbG l̸>@֌c T(>0 | 1ڶ893! 
\`Âɵ,\Z U:q5E\ F +*jT0"܅+RkHe/=j:\ V:\\mCw\J"WT@RT#6  HWR JsMH VPZ!#&+Ý?{`g \ WsWR{"+|r (\`Zw*sW-!C 6+]`p & +R`pr R( p \L +R)`p76p&nsFZLE{7 (x-(Nu2=?PqMLyĚFͦQ,RʜI]V)EeuYzs{={^gǀs `lmz@McH?,x^fytda\TWJΊL5++(s)%Wb0v&\5nZ1RMށ*_E.%0:qtvhspE- W$x("cD\MWBZCH W\ H-Hq%$0jF\=bpE+IsL("`E\MWJq6 \`5>qE*!jJ `[ Hc J.|qjjpE W(W3$+R)uqeB 62\\-C5;H) 4W6+y8A+HbTw5E\9 W(بppEr.9L>TI 1y?L3Nb`v&=[4.`֓{&4e{cI8I 0^4]7~Ayʁrڑ  S[8189 $\`}y\eB5w\J"&+a8WH6\Z T~,Yp%6+ <\\B+R)w5E\)R+ <.1̠3qA֝Y1hu8g,,ZdZ1h[7mX~^o:H*yLYBUx.ZBqxfE-PU@us)Q8喳Gfa77QnIsǾ9Y~\ r>o4['/_ˋo [kkVhw]mޫmLg1 $ҏsUݵ1i~KɳɔJ<"We cK7'wln^=sGF?10L0'u# RkHT &'z.57" \`l0"cWva*{uU"+!pA`pEre0"+Rw5E\IŴ W(HW$WPpEjWFjRJ(lT0"V+R {WҷbBWς+2 \+L("{T$qeU+]0"& ;H3Wv!  T;w\J 3  :\\cC߻"ѻn0Ҡ!WeքRm"4錑D/uq5Lޜ͛(跳Cv^}@O֠FhZe5ҏ{'۪]mI͗o_|Yd,W'Q\\m7@5spY nx?#_?=o?!fD\M?*Lש# [UiY}|,.ҏꔥ(>y&9] 5BC]q4/EM,x. / d< tVjNDНR{Y~zY޵,qd_ƞ4/D(f!k^ phBO &M iv5\rA]}2f;~5ڞO6KٛuU*[G MA5-DtVΒv(͡p}VB2%ұ[R ]sV妬Wr)fͰxc4j>cSh;BC(B%I:,e*:6gBIV&m=5Quali4Qh) mGylNPD6%K }bI5Kc8W[1ևؗ: kg 끒rWj9в=b.;Lf 0kD34p'-Ռ iTUiԒϯ=̔<86dڦbҥ逶{І~¢d,4D2R 0; cYlnc0j!SU7*)ND^W}&igoibjSEKG:SAd*X ſ!IBާ$ͳzM59ՑJj MQ Ar#钍5wLusaM nN';Yn)Ǟ\1c##$63f Ȩ*OM+ǐܜ`/DmT/ *Z}ƥ/-@M|^YbHuEQCBN|ҧyR'lŜ-HsAEF@n7((ʡ7f$1P #qҰI&}EՊX{Buf; %^,KTA BqŘAQRt8!a%Gugu=819jkU0o3?8;5͈ G 1=*4WBϛБW%sJٷUe ìcJ/zOG3}dH֨Vx$ ;< cxS`QՅY,TG7>Xżs mG5j%eD!v"}IwQH?wyv]8r VL%V  XF;KrIn ȋY TɣB- ā 0H HnLCQ cYۚBΨh Bؚbut  R(VWC+PS`brjFF^ZF!/D(DxQ:Wn4X>3+I@̴LZU ה!P?Amj DQ;Pf'0qVcCe t׈Bj(crf:PF._tX+,*j$k4Y,p(m@ [SW`$XHD)40\G÷h5g jƐk9Nm/`zkr d߃7tM{~k\dǧ6 Pw]< pff=+[{ XQO-> U-iVn6fY#e-6n(ǧ= @zVF6k80)QC^"$u-ҐQUr#a?EtNJdt `%.(H H,@z|!(!=':T}f=. 
>"$uSjCn ẟ,;:YycaUè#w(ᲨHitrIFn:T,~ xm,mT11mAiom-߁tfzPkFQ>3T;*\Aj},ޣ~]ےYo5kPqab2|ƛk΁ >bѫxdPi`EHN֔70hlCi^3 _B\ RčtXzGIzӠlCi#8 8%oC)F @:"@.* lnT *˥11DL0r";@jPx*yPbٹ:ՌHPQλU~Fz+ZP^) cV/flovw 6<>|Zˬ( k^(OśJfo_yetpƭ F1֎ ̄[~hxo>.~s vmg ''S^~n߬Mzqu7݇cl\]xnKm 8A4kn/vhuWڝ߾d|{[7ж˃?|z6ۺ٭{k $&) * EcI#\- ~9N Y3NJY:PqH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8q@|ผG1ܤBѭG=K''8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N q=_'JiIN vf9N 86' @Qy $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@zNH%988@n1N 5;e'std<H@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 $N qH@'8 @op`:z᭦~{svan/67]qGMv)%%Kh1.=ү>zJ ?^rX2|1{V6gC fVRg]Ÿի|vwx+ZC7kL|izdޘx_߳RZr;97T}u7l޻- &{i/ ԺUm`ݨN I=yZ7skk n4wgNGpG鬭lqu;2K^ڥga+Aq~sZpCZ)8Z s-G^w N1ׄ1c14CL-XbV--NWgHWFEKzAtŀrÍv)th>+$儮!]Y +=u.bn1th#=ubIHWMiAtɚz)thᄍQZTz>t崣E+=֮+kR:Qz#t 됼]]y>s9kW W/}w%s`]o?pMYڠS+Fi#]Ec_ҝA*-YO $*>yAt b.. Z9U 6dEqkAv+SxH[g֓VpvV+ۋSmߝuˠGow1 еmz5\eZU|J1om(t_5p0( Ѩl~׏S\Etsguщ9^wMcMUuq%Ȕk>\5է !e|7Vaʧf4Z[֔_ƚGyajI_A-\= 3_:{v< O: ΂Tg;jkGb),{t(ztO~Atŀ] ]1BW@=ub: ]=Kr)VgAG~vA`tDNz۞7>?o}|xI|XKVjs?ψ7罢^gq aѧ0_+ZSۚcn/Nv}wG~նwG 9zy8 |u?1M6k$wN{6K(:CeP_>ݿ]kbƓk?{Wqlqd]d=Ya/cVY M[ ]>:YU& |S5P5 )`]sO+%"ϸKT?\^ek#4p{<ڞ b}{`>,[v'Wf7/kdk]k-5B)Y/ޣXS+3*pAw-oRXA)wHWZ))uE0WbW誠v*(v 7+ (]+3tU v*(eo G*[NQ9Sˮ5J ʵ枮]v);=/({u VХc\X*pUgUAtUP{+W`x-ٵKo0Áņ5`zp\/u1W{b%q܇ؙOVɩq(ev8a(yF↿C誠tUĞ!] 
#;U+tUо{UAiUOWMC龔 `:CW.UAv"=ҕ2 L3tU)Qh%o1X=ҕ%2YF";KmRtX-y `Ŕ ]玬~ZzuUPҕƈ.]`˺^r ] JezztvI]E& \)BWjR{A{zt1؈UkUW誠Ev"+7 !5\ ~)8,nWQjfu |}O^hU]yUQMp}CS'6̙rCGZ }Fwufi} >;:!sMS8VU4hʅ8͵ %tV)KL,EF8%8Df2Uyj0ygu4d?G9Z{>u=w/Mf[Vu@$><|~rԗF?=\֔_Ӎe1F0$M\nV`їV%yTDZ{ p^jLoǟS燏Xf{7"w2zer?Ʌ-nUAw8"x/q12f^IJ4{˔t٧ &z bx3t;Y򆌞o](.K.%GS=bY]AojAgb*s_.RףU΍qzxwpX,Oߌ?ca\qEYDϗ+R%\rYfq}- et3NkJ^Q] OMKFt=4+j.nW1KT99.I/Q 뒎f5|I?L3_dP =ׂ_+&0r͹US&̲Q恾ƓtiT/iRH4`DKFK ]H㙯b>=^aol0cZ/3GNy6&U⒝&IWf0TcKpFhVh"e]$m8IkS`Ag'B.x~^ZI.E6[Ĭ*t,pϋkrrNfU/~6蓵ixVs^K-(W|ǒZrNpd2L;O$ae *d06&<}nntvZ1}ڸߤ5o$x.2 R+3'boS%,H}:oky}hiEp8xTǣ ,VkMmwmΤ@2(;qpřVyr-g+@wr#Lahx|7`$Y3PdpLX)mȱı֒5 XA9%$5%ŴŚS}[ 1*2!1伖rw(nŷ緼;\Chx.2ߓ+uΉ?)Pˤ#0M-,TӼihs,gUk˭dse^kJQ*;  *tqԂ鷺H_r ]>.Zy[wXl\o@o)S?at&d-S GK3U+DU2aMaT \?_*c$s"VY]y)yQɠbvek<-} lsA2'U\WxmTX6b$86Iqs ހzi|I1:Yn7]4΢91%y ".LQ]6ϩûi&};5mDk A:$l[]>l-y#vۖ[Yhke$Ehޠ:GfLV<=s|63>HM0.%.RYj5%i$N4zFKG8M{h ΁%H?R3RdS9EEv9i2[P`2^x.iE+d"MbCBm unucA3sڂ D@rF޵e/v~wxظbC>\4pH{uAT_zp(Z#k Xg5=UUտNx dX̢{[T-W5c2%AIP*@XGI/5M7IPǙT)4PDs^:$c3C֞u܏q&k+p#gB&X--DH-r e$Qg8Ӓa;聋LtŽhya@ʍG\jvI<&MDJhlЉA pF}cF{6H->B2$c|OBq 3')săl7N.F[ QBcH9> D)ĥ @%h;5rT@x"5"JPJ% S^bD 2YɌ"LHHTCm G I)@>B*'%^B/rZә%ΕEV NBi`VFܱ>=R:z[xy_/e~.V<|bޤz$J)y&Z"jR^(Z |9MRh :*(Tt)}KIAEyoNb_W}]wm_w;о^ë!ҒroCZr+1pm$D"U 1ɤC5ɌL0j(^0-d1q LKnkx|{1.tI;x >AؔTЎKeʃp]R ,Iks>EFD>@D{6i,I!ApX[Qjo@jFocT=Fɨw>b$;D'@e`*6Eӷ E~2I ս >p)=e:m (Sv)&.8R՚O'ʟ@ hcbZbDzdfy܁M5zxo.4x擼xo8#놅[?IR헵}Q;?Ye8;,_nqYLgQ5W'8@wa:DEBW`K֣•uEd XXz05, #Y'٬>|(@{^aR 2}{+Pr x9"As I)c4$hL xBe'yW\iӾXm Fvu#,@OTH $z'5!9VǝeYM g51f}/^o7s9ApL *H`t.i(Qq%S Kq^(s(έfa 虨z-n<Ӣu1owcu~m}lzaM^>b>ܬZpG?ͦu~KAHHy$!7i<1\g(!Ƶ~̳Whxj=`i~8F|ȦQUiԒQ2>O|zVb6-~St'w`4]Py7Wgg޼? ^p4\Vp~}!/jg=_[iQaP͋/]üYV u(ay \(d٠Tr_jzQj^:=M/c8pKVȏe/+`]/OnH|vWZ'Ǽ)y%¨h$!WF J^eLѤ`~PWZWW G1X/TΜE 2>;`ZSWJJ T0" oH$[~5f i(.wsdBK_z6VlEn੕ʊ4mRoA#-2(I4`I/NSheiG'ӋЫFkoNhEd%T( Z'Em cPted^úM! Nm $=蠽#AzoM;\i m/2~a/ %{ %xhsj?7j'UDF q5ZK˽PYzAQ)Dj=dYZ+j Qi#©(b~;C9\ՙWN_󧢚‘f_/7C ;Ə?V5 dHszӢFljuxTxΌ_*畂Jȧ'/)u~ & Dy">mB{Rzjj밨y 7Y9y]h<k>Nlj xS[; آ~vtθa6p{:.KG}闯N*bwVKB}a3k? 
OɾxdwVN+G546JH'crѥ;ˌEdV]U]ƪc޻}*o9“&0B(+\)D54M BhrW+ų2OPl" Jחnj6L!9p)F1j8G D10XIK*雑T5 t7JcU` ôRhޥGtc]]ƫdIZzb=+"&|g qFKez</֔^BZ 5X8,C7 ;z "ŚZ 1}` w(& lpMBw0nM'M+WA8EAVzA5s.<]9tDF_}5MAei=yBQ\DYkh|B/· 9jhVfM9:6 ]K>52K0 uidH{s2&2h,╻ j0^#TCUq\V.2Ho)&d[0-eCA^E}5W&_op6dCij5p(DQ!_9*;h#遤lq11ڽ:u~ UPXC18PN_]s5sN oSb S_LE_~+O}PWڤl/*U@Fzk8q' @ǴG{Mq4.SMjդ-O.%h.S4]cθ0S{~=LUܵ(z̍Jl9 3*& |:DŽ-Vޖؖ!\la<XZ2/mO;g֢%@Pc-EV{.% :U%tx5}br.E'mVw̨+# x7EM [: "G11U#p}KcE0G2zhN`bSOD4@ GbPNsNPK F7C; ˰pu5yYs%ccU,A)b y%Ax4B*!NA54^Ui2`TTF:>Ρ3n Vӊ{;_S氭UːydzpXaeC-SA!R8 ⺘ " tస9TvnݎmOI^NM.6]Ђ!SDCdLX|p.l1ޣѐjE4 jv2P:?HYYlwivAGR/,;e6%*&%|~iV{b~}my %BZTFyA nF hE֕ w9\F!^vdlu^]˫}S6ޓi2UOueos0|\ÃlJxyAYD0C긕sӸbj`&2+KTCu*e$ fs)~yZu.MpuS`l|Zf ݤD,Kn":+ǰZ_'8b- rV踑ty:ˡ;YٹC+tν9xǺCS1{~ #p[n8D3M0xWCֺML͊kh8^+Ѽ^uP: 5&*Qk%&kOS֜1ct;D֤mqwtg<~u*¤IrF`أJ 5k/?]B*N$?h{O`|5Y YZ7ǐ}uUe7LD+ͅzYYuQg|ֳ=޴M5jeHV&hAg=?4 嬔7M+uN؛]?cΊ' 9L )Ю!K~҇5 &lr4Rjސ$@ ̭:ߝm&Jc̜؇]5Z)vU +Kľ9WfVsv}i[\c[uSߕ)SW|U'xGX7IIG|7؁;㎸fg X%XW !13Wgd64uo21cHS8ZȔfh<*/C [j;AgFANK:94[1QC^Al:XTM\(8ީ4HzfMm LFAy g=D VҮj #Kܷv:0b0`ѕ f/4M)5ruoKk nd~4թ9xwem$Iz`Ȍa0 h^@R[WȪ()F"*VDf YL|ZvP7M޼u֓m EHRf{rŁ.O?6al,vmzizeMYАѝ[/>&r"U6;jQf ͭ5ăn4cjJA1V֣VmZ oܻ25çSWq NAx HO.§4F}ᗠ÷ډMp} <]g,?T+DZ@_']l' E*[Rik3˸[gam=WrZ.g+KHOz@4bZRǗ&;ЊGm#0Кי5qd|t^,m+ho'߾m/)Dm?+Wkym`>/+L󱢓uO̰u0}ґ6[BuxXf=޶ skeP!3CY̟#t$fQ٩&eUE (snFȤ=ϥYM.q҂Nd=نyul߶*LIhwz`TZ)/DRwR|UV1QC7$ڱ@ z$!h|>[n.٣+Lqj:v9oQO7$Z'VeDq5ރvs(~^?Zy c)^±S Vg߰iUkz06R 7rn 䃫F֔U AnlkeTzA@C;[rsm>6- I#ǞjlqH&7 < +MQI5Z&ҍ6 Gea:e: Ja68PF#CwT>#LU+\-P լe 8m#Mր{7U сa>_9MӛE^j>|=?\\\OUHi=ۃ+ߙς^x'a=[=C\R:l+-RL~}U:F/CY݌wCoܷ*/m$s_m Um_C z[ȵ|h?k0]EI|WhMk)bCvvWml:a|xN"~uMu~|?K:"gp֩KdcXbSDqYomkRG춞bBʤ F{UF3Q([:vꈡV8FAH2H*MY KctJwxTV]|9f]~ϓ+x2n%i'c+.c( 8^a@G ;_ #![Sښ'I9Be |ƻ(|+H`5,] (N8H5C(giD"A܃i1[@1jW(J?=L^ˇn}gNȂYjpB,12~ K9k L9FKoЅWЙ*5bL;zcDh[UL` f(,0uIE-ӧ*LelW>0FfޥS@3=tB-kgGe68 qZ20DL- S GZ6 bׅllJ-0@J[)Ā$wkȎ*(WF}AKe'*Er c؊rB((Ͻ{${_4}뗯D:[/eqWtNӴ~$@!aJȫG;p@: .qeRܧ$BBhI!x&L ms EX4f;&^OfArZއ^Oa ]Ӣ{iv7:k:,p/'㽓Løբ>ayuhg+AR 7H^ڋ̾ l~ (6F Ԫ籶{I{A%YG! 
>ucJ@멡!g*LGI@j#z qN@R`TX $(uJ` ݡ~=obƉDnkujKciqGMXr &&o4Bjg^-d_S+ga) 5Yj7L/N*LH@T'c6vH/X mU'i"\rrCs4t1(UyIQa8BR4/F^_Oufl]E+ {M [++$,y\ l dKƣmea9>jwwQئDamJiGag}W~1;7uN߸7'Smw*:E s]^*e͒ ץRZPrv>{Lj*+zbe,y EW\7LʽgtB2sĘиX **ѹ<;B2An4ȂHF{(y-|;!'/sƜ̥hIx&p$SX]7qF"S&Z/n$l超/'%P֛2~aN GD͍S7$ElE&K={8#^AN6,4u鬇,H{ۚyY-7Odiݵ36)Migl팝N }9R*ە3>hdhUAɿ6EqxfL-( 9vAt DڀrBʌYRe 04y{<ЏϐLVBe~Z;,C<>262~ (޽HV[| W_dKv83 6 }#LNVG\<m&AΑKްCKEF/&I{Z#hYl⸫`;-VR'Q'NHN` p7Qx16|n I rw]$AԳ>&Tª~$i8y5 i\%MlATLEg5`3Zy+Ό蚊 Zl̊]_҃G+>o$#eL跟޻[dߩ|pWNZ6gims_f~. n;X'>Ϡ/E=/'WM%} vdzt}zJpޅfgٟ''sq+ԟ .|[+'PZ\F d +ؿ8%a)ɽx!0N<;`bs~ `|%qcoNǪ v{Yt%MLú+AלF!cnUiS2O PәT,IEZ[@qΑ)P?&GU *`7D㒞BͥZQꝮ6|J њY;UF5F%jxM|aiR+Zy!uE4XÍVzƧ$) :⌡dJ2(68iͅG%sQ!Yfx:Ja@$@2̜!pU '0'.|&rArFoC\8xmZ8kX/Ze_5Z1tCnyӉ&Qq̊CvBGBK. h7hv9Kښ B4Ay&MtL #HN^0/» Ȓvt҄ƥB9'&Ύְ -Dm%[Zl`jaF>uҚiZM71p@,N}EfLBF 'ѹH!L` AZقv#C5&>%;ژ&6B2@nך:瓍,(mYe ! LI8!@%g9I1z-؋W&x!o~}e7O1zHM0kV݄6ژh%2wψ%T/ayCnPMm4_P8SH[ ZM.3j}wmmHcRdUN`0xMQr:Yߢ,9vl&AlIGR$?~_Xuz NONfR?m5o,{5~9=տ.Ra'[ݣ:O7auj [-!ڡc&oW{\ 剿]#^j6tqfsVG'^Z{;Li-jt@CI,HَB,!Yk(حGNi#X;</Dˀ{{#+I>94%^X77)=_«oy6m_6-ؾ<)zgszBGCj(okwLc s^kn]bd'T  j^vkɑUZs2u{^WXrş v,\)v!@ ޱ4mOe:U=Q5Lc-Olf"Aw zDIw>|hh;?}Lb}q07+'Y]Auo~fIM{[A2;%o=%Ukz޶?Q@J }6!JbGocޕ =1li>]̝%Gb#%gHiSq2Sq.VdF'{Icqv(YrΉ'w#L] 3> my/ġqŵ+ L*|${q0xsɨɳ;e!Ew'ך즽:R7GWKqKwdAذ5?;VAjx`͖.$GD_)sC: xޫ! 
>ueޤog Wan)q&p W+B ~Tl A]/$ 'L-䭥^^%Wӎt5.PM{}T/>|*c6E /|&?^ Y /Rg]<%P'H!VN~)'^hJJhob򤁬O5c>{{3ϩ闁pTrh{>Ώ۶QR>*poWk.Zί^{-b/4֨ٿ$)A#EU{ rQ/px).d>_r '2Rן'hV695d{| )K"㾉z|7{#dϐ^_+o=fr,Ϡ>#L kW.3+.dy>CE[ jb"I TJ pVH%`fVS/4YMl_i r(@#4ʲ6-a ޯo=ԕWưu%)G'{ 5r7إc@Վ)?΄ޞ|1|ޞj0ysoo&(/c";M&w3CB}IH/Nݛiߞ^epL]|~ixA&[OZ~#&&jX[(֒`YB{rO~BZjap0;koguҎ5J1zҸ> 9&B;k>P2`_JTC2"{5C3b`C.{`nUePdOunɢt) P>r6r1L8"NqSdE NlPߧ=lܧt3٧ޘ)?yahvۧl<ZB 䅔C&# ]gXcB1u(BzIoRˁ&UAf,,E:3-:dkcsL5h &)c csDebթ$E^qґBR2>Q%W&%Ϻ$k L5 LjZsr(OPqX>8?ي•YX Iib-PC)#c!H&hlI!5P-RN Q KDh6L\Qe-[Kn7VVj!*"C1V&b6)*lNBe*HE6,73NqP}8ɤHd^@x4WFMEUI& 5(̘ŞZLV9mdWתoK$i7 /ʽ\T\Xum''|\]O^G_ѻ7X͑nyVٲo;0 _j>˔ZeK,i j虯$U曔Â4pI"n*&U!lfT"*HgFt&؟EL|V.D ˩&RES%BcUSa@XKAΨQqTS/4)ԋ5Z:<-ɗ2w azC\9Q,Z i~m"P%@ry+欲KDY\UyAȭ*˵U潕EԹ<(Tj:tHU }C$*dƶudQD(FL-\[I"9v99T4(Ap/;pIbtJi3YڸzWZ{itх]o7Կmt~zrձx;z-??hv%]\R~EkO{V4 Y3H2x3Bg/THrC,ԳD )$&jBeT[/S'N|7[y+gAvZ3g~2/W[DS8570_ ¸|+3WG:p1jsBVkr'Eb6_q^Kŭ7j-SD_cAؕ]Br6bMA뼌{k@E XC ϩ}:AD#+ϯ].R"ł iՅA_{/Ka /g++ybmV>4m3|}+`Eݝ}[v ֲc+fFBpugɑs"&l-#/f?Txe3F5LP3䰘0mo*a//jx Ǫ@Fv6j!i(0eUS-*DSaq/d~,oЯMȕMmZka.#`[ >;7Wyud $f9Ed3Sɠ .JJEK 6HREP@N6 tk H}tr=?&Vb@R<ɵ)RݻLD*l"OΏD:v&vn͸~J&<@eg"*~)n.ԡt@oI꓌V-QxyYnJL.-tet2@ҽ—i=pݔCl#nVyB<+*A`y W lE~ؗlZzH0,ݛ!.KXYM%aδj4m QFw]uZSI3R, vG`W1+v(N:}mTBٽ :iYtjש89սv;ݼ$j-4(%Q;EQl a9X1ujwX[^0 ,k/ol tj$N ^ -#yj:n3iBS'B4Yrvj(r49D;7@ߔ3;iF/j=YpYѹD}tgv@eyC wUDӮS 旪tj}R;FËrڑ<,cҤjEujd$^Sr+SSՔI5%wTuהvL]OEIsV3֨PF@߉ye.sy$yԪ y;Ӏ27=ˣV.rHPbj-:k/ lʨ0PV u-o1ʦ4'[hHO+%=!A`tb4L+JmcXL=Wp)wzXZpdJ-+(` 7@΀ffXij@Fn9\I^{u6baNl<0o1vn{]6pn*2@ 0e~PaO&S4{^JLJV<'нRݑڪܤfS=C+(}Eܓ-Z0jŔ[b\%CmNu(FnGxH8X`a &ŽDi%\vDr#ҤnvDrˎHG5\ChHqk?+D1? 
Q=BtTwF1(l=@=vav lN> |bFiQk沋lܩwaΩ) -Ugk.5R*T ѦAj9ܝ>ԭqxN^%$餚E؃O?=ʡ.8@/4F|~ypYoSeNYn~):jyL_GO ',^Ja*ͽ՞FGG̓2 rKS "c&Oټiff;KL^4Z7!o̓fQm95fRB湖E)f"ޝ)o_%%v*ؼ8U.R4bՙ~P=gyJe.m1Ou #d QxEb"Vg&c痂t~E,Ņ0|u 59OcH2ExPqth6n(́C, BB iY'`NRUb"zFj=J)'3$)a:5)yzi^ϪY|m!d~&&۫?akac9C%.L$tLEepG+q>cFh90j ;NQ@wMwicNHӆ:Mh" SӴג@"P;x~He'ͨΖ!ja@?PHs7р)ug#>*v66oë/Hax?OUO҃6?$E6,z'?{Z߀/nŶ;L!a[J;IrK[Ji"TK?`8e~Lh&)=hBm& 1k{vw> }ΦCnk3&GwY$6hwؾ@D|b :d8Ldvw>EDXp_z!מ0.w:?圿|?ӻ˵uv15W;硭__~A񙟥+⢾fs<\ ;ϯۨ,bbp_y|6NN~7I%Jl|>^ oq6mC{v@!nk3')1͞kۆ7dC۱42SvjOA6@)c7hD&UM{\4׽oG͉+;M斢"]As Ql44RdK7RR/1d@#u񒍳r^gW׾'M#~8/ۿ+Oϐn={ŕ}W1#H 0z :R<SEa4똽f ?ؼϚQrm(&kט=aS唸`kXl)Z:%rx^~}j8v)9qXP$B~cM]mע}L+yBw?TE%dpi0 q&JڪD%VaC+5O-Dq1@jOB[huC^73p1./$yJ*<Q(U<'Ax@)9\1pc ? X1PPr7dNWz#O<䬘 ysX,L0٧1ƖB:8H:/ mk")Q~X-beh{UTNaJ.^5aUIu6^^םDB27W^u)=KU(ͼdgZIN+BPJ/G5}ŏabM4Q]@d^,P7%!HdR4q#ٿFiD~(b=lmcBdm(]"i~@a[ՉLoBA4 ɔ`Ic2MT&@&摋zrF3R)O- 3F'p.Μ)N,y3 wQwIB1Md4!6TF4z3<;b#[QƒW ) C/ 'Xg2 {tȊti&ܹkH1uj ~8(gPDbb}(aCTK-(gs/*qZ͑@]Bi쒝/i˟͘?STԅph .ٵKv+i<}UmS|&02UQ#aߧ=TOL=3/TMk  }L!v)K>OuR8!ō^9f5i\p8q]Wd gDq0D'z;sq9u $< 9rjWٙ 4 w:ȀuZm5d!=BDmEպ7>d0L/;2X71GP$U ,*apWF*s 9$s%u6m*·m.]L?}v“h0 Cz_j}9gryA!8-GJC:P˜ElL ;9W,5I^ʠQЩoD49 kN\ܶXVXO$!i@ưh'ש;cuaE߱8vq;ɔt輓9I`SO9>8% ٷ~I.8i] y9H=E 곻6C nƼb+.U2 A%Aƒ s+"+&rƍS) N=X\qJe`l,1ED:;dH,e1/nZVP ?wcQŽ )8\ʻ-%F#c$Ǧ?i|TV{r$q\S2{3YWOB" (׭%tn v:<Бٶ׃ t,2PWbGWs psrP,xʓHzMGp&}b,c;^؍4@%A#E(< %@`P&IJ 0X1挐<`˿/6$Rrc )vZD2OCBL[t&-'R*1q<ʨ5 yhɨ 6L'N#HcQPCIr=1*̨ls/g}~JI2:їgkz[2X&Fćɮ {V0(s=+\#g2mYtZX弉rkBU8i'_3ωoeh@8@\c̭upn}ڽwQfYj9dT>i.\sc?a7?=Ҭqxi=}9;> pwA.1+5}1>8wn߂؏){XqrE"k`~ 6?`Ӄˢ0Ojm&+FdˤփhSJ]LIoҼSgU ~K͘WfKRXv}r{NI1}V.v?NkEYl)y; ߍ|.Z>w9=;͘Wt9bJQBc[Kn5U_DOB0Ak\$/&QnѭHu+ k)&QD7ڳkOI7c^.j4"UEcw[Kni2NE-wj-'+fIGz .D+'k 󒠳&Zh}BQv uPjDK%4rXT9_$Ga_-fW.1ǷXl)ԡuh:޷n,=g'po =Iy;ݪ^րϭy=nR+*.@-Ў=%$1H`I-L+ lPC n"!hKYy _FoD^#I%K\X n胚aMe v'!³)I\HdQL|p ڵ z).zQ@ij(j-X.2ė hœ1©bZF}9jFZ4lǥ1;`ŦF]7ּᥫ(XzKjyDQ0OM> gzQrIqy*jgnza.$-nknlVTQIȹ [vI%Srg4V#vq/i㜈Ȍĸ+eU6ru::s=ԿуECoư;n:n;c]e/ZU[OuOO6˷~}r[z>jm d`S;R!fsj>i蹈#_C]qAt*0E8jhZ].|-E|&j,uf1KffWvq&ExoWAkzgfч{xD;ԙtQ6uUXƎ1w&(Un\!Ί 
XA^JD>G0ҔqSK(iMii$ֽc-ϚHBG9:;3ǟ{b|@+AOnpj9"/l;u\Uv~?51v7uzofuYa1OC_!>FAuL.~JvkO[ge@=!W߮?v58gtxցIȄ pyP9m;`z&L^Nkݧ:`ZbjK`J jLiߠ样~$-0!`m5ƺyə1vbmsՄ8rUZGj`].J܌y5u|f.4F:\kmBAi|Lo: cWݯ|I5y5uD !P z|[K.̩e~ g"8}9܁K"X7!\7TNMpp@o*U(2.A0b3 savr*<#&lwA$&@5fosJИdvfo*W1$̳@kUQ;=]FvZ%)1mEJĭ[vG6j,Qoi[D<'w-f׈gB|9P#vwA\WN[UjCfjUۅoؽpbhNcjE̳Ԯ]mn!fkP;DpM֨]JsQ "̮F-"{ΗQRxKNm#MBJ6jcJ8P;B ؎c7`*E]Rv9Ҩ]vS@m-)-t]K0](?%L5ɣ-|%.'kA[RSGQR06ȗô!mUH- B!5uoxdVlDC)Ԋm?Fcr }4jX/^2jM4su16քn57}6B͗ÌP sѭ"_؆9h)lB E|N#BDbDy6lԮQgQ Z`υr8i.Qt)lQ;x!T k5jH[-N|uv P;㬑)+}8Yo r>H<5gYٻmlW vk0.:A Tƍc3o{(;lɶdى=fQ#ysdY$XH&H"%hd.IY*u@&Up1G$ZLʬ=7H b_\pnpIiEH,aF0JPZ\$ .t\W̉/Yy@`D . q0JHT;9c,vlNqN$U'rTjTjQU;QJA' C V<֐npFwl^z팇Խ_pѭ_s·AoeG\dx,Lpxr&n!aGؓ"3N! Kܪ"v_aBeMw$~!~{?FGz]1uznwkO n/j́#19"5REP,1 ;/oPDQȳp_+(@R:԰Ek7^{'7A+čѰ^QdM%7'֜#b-R Ѵ+hZBlL1,Bl@;TVYlifDpFsr*o&|x79FQ ww2FH]P\HN".,6ZnKJoZV v*u;{)B!Ӝݦ} (<l=֌>={O%XG܊Q/DkRq GZqWyMzYO٭je*M28v- cNUL|⣱} xg΁\6BPBAjg!6P_lRUCC0Ԃ1e=T]wC҅?'LHb8譛r? -phk3y>)( w3AEВ(eRn]Lҵû:VHxR3&k%N;-V֡6mbvVŢ"6Es qC@:#l?!Df[1ODgҵZ3ƷRGQ>sJ^@J+6(-[4P_B!B̮ϡ^BdFH<<<]kPi.g-n9فt~7|\-R ;Yop$82Hd*%qVQ_q+Ԡ5D A(@"O!i$I,Q'áȄŖY o–; Į3N-(vԑZ9!X֌ƇJ{?/wjAf*\RЍ{ הZ6 B ecDV?͔OtyplՄi ="v^!ZPt7#()u1,:]FA(8xF(8_ B.ހ8)3r2bC[ RR_{Qp40 %8 _b0i1Z1xםBTp+_䄟jdϓ&vA4j %IHSfqd<x ,0iOR`ጝ@I-peНD=_Azkfo.#8SGX#ta6C&T9F7RIs&fOxA\u%;z-2̳^+>O;|xQ[oqM{k0xo}|*fb({ݸsJFcm |$Ƞ#)+9Po3{vNB1S} s} H>IH.ROP ?M֠*tA ZBKBRDKd˾BU+n(g 5XM4S|SUdEh5S*8!4 M\ӕ^=]Kjx~,(¡2[FsQZP4ѠV7F(J*ZEp*̈́4& &B)!LJ&)KyFR'jTPI@R- Q)ܵhӉ78(.U 'VDIH- 7%Zt.1&K8e$ LRnjDS1nu@Z2|'woFMa/Zt55[m#-IҰtԤfDXҦP™ɧ`s/.9<ƃN=:z-_e[ ݼmi ۙyk?o]Np9><_a0黵׾{k^k^zÈQ9*im,e"-*uc*JY L,;ռ xuS@y1J'ܟKx_|Šyh϶|ewКҏ^b=J֔#%@9YWLPTC3x>JQA?_!@Pٜ{k۔ٜu!| 鲎e mPyt6]M׾nunSPќ%0)h G ͩ5>牿Gzq~qtһCmdݸDWEc<٩7ێ.4/j( ifsXh:/ZHY06p>t.}><^=K}_t)!%ǸǹᚐO3v`J;hV?|_fNb82H^%q]ng{5Y&1 OsgE5c15$S˟R1C2d+/>^`oi _{kG&ޥ3gRs]èc0XuΛܶmwog 9"Ӹk LYYk|mI.1xXPP@pZXilbsceDFtg`-O=9|om^1Diիm|#sx̃r4i ;-.89|~vwÏ$Lu,y j]ƲlB =s[{a/oqfptG_,`W}L:TZ(b?EgfKJ7Yeni汞JtݙGz6gl4 /(t/7lG6f7}6hDe&*T.]yv}eiPh+n]/4=*j? 
u/`]oޠ/""YfkIn_=]-irWEYwa!?4%AÑpuh2rG(b 3j?ģx޺ޝd7baR6^eX׷t-hHw! $s&RV3rwS %f92hfF@(][n4.M }7*Z^PFTYcƗe]l8d:>3 F)4M3ys~mGSII -n[Ϛ?⬺ IzV= jj@rXN)B2&J8Xh`uJ{g// a?KE#I6s ^߀`Z,A*-L-Ɋy/Y:Iv_ =̅TPxX ` B%]`'׬QFTzsQ~u^凹r&9rWU⡌S"R,$SAc#DI<)J5c<2uR S8G<-X(#moKB&Q5 LAe5q"Ix0i:oD\~'0*t! mйd{ ֧$}:Px79@ghJo&*DmQ% 9fqHcxq7n ]$Sy8wH~a]Yq濿6~-ݫyq$-B3ŵt]k `dz,WLQ)mwJ gqbܻgz8_!20MվE 7&KD6c{nر-ԩ:<s>Y]s=N Dns:^P?8at3NvYk'1S-F}p5C4DE(hLwԨχ7Zk2r랄lb849o6ƢGʜ/3Ђ#-W^y =齂Pm޳4]l47]SᬃH.W@ CрDcdZC%07J2]f^ƭ<ObR2}8G6ܓݙd ZDh8!QDkd:쫓so{Jdqqu`D$?o XEWX蘔`|64N@cz;pl;xT)N :W}d;c_߽?PMg?gSo??*Ծ= A9Xl\+c8p̌yK\6_%Da2c |ʸu072*MZw7\i3-1\xތG>%О6NjᄍZTZ]L?.*._/Rqvz`DkkOuR{I$'RX5u6Rr"c! ʓp,!#@^ 2]j2%ū+YP ).[~7HZad<}(8+%d*H+ʼbZeѕ ? kp!"Y(#@ !D-a*)H0Aᄑ*` 9AN7U@vT@1U"Hbݺx mJ;uA--+ Tp0 ЦAr{41HL*z5~ @I}jܤgH| E8C" WX2ziM!*) ^"CډB[>R*>RêpS! ^0~^h%Ɗ 40=# c("#rFt} mr;J,b#ؓpKKUHFi`Nѕq2VZXbvpz KP Spuc "=Iw S"`cF@e^|y2Iw'eawH,a#<-i$X04PăLQ[MXbNX*Rz+*4]RBO ݽ $b"}L yO!IH>ֽ`+'] αO/Ri$&); 4/ёH^14M+DguXBTW4HZ.n WC@Ci}V0_2!Оw< b! "SG%$Zzݮ,zr[ XT/_/~RI/Z%.xw-dY!|9 5|Ϳ6Jhѱhb.~٢0<eX>1oț/jsMQ .6jڈ+.Wl.q|;[\zٯwo, iޔ$Mpknz;B2UoGK?RODq둒:s(0 S+bJ勥„~.{# PTl7 9_L`qrK?vgYh@0p-t@!d:ϛ&YصiHʚ4&@sVI ynUᅮwi=F,\JSL ޫK;n+3d:L׍+jw1s^Rv_DxII$v3mt3^?+^Te(e R(e YCMB54Ġ["4 Q{} 9״3Z\Nц>w-Et!im;r7{笨3;/oxO!V\[Q1C "6P@s)x :fɂ',?1?A0x%~,y'I 1SOf^"BDTh ꅈ{4E(tVS5:}+XӔ)m 2Ha5\jGh.}ykfoG>|P~rM^(x^MK}:d(HLUώI 6.V]+UgEc%PK L+hb]nËެ{9[9 4zU:9JPѣ8߭7JděhGD: T [#<[ p q!ȂqVDcWI5gvXCY:s^`@q8  I } u({g%jS(g[p!BJ)2bզb& P64ɜ7'wnM+"ўw<}B3EJqr.RAA(9y;Y|{Lr|4~ k2^ e.n_7/[R(R_w*9tc؟`]ԁ$C+1GkS,5(8-d;J_TT*}ykp.:x{7PBPy:*yW> J/uة8^^6yȤL듻_ ?۱VA:e\:"WR^hM/y _r^>_k (Ae]9SX^T]DZHas='B _FҌR$(EҌR$ͨIS#DM񼱆SQR(Ȉ`uDt+H̀ɗX}WҗfeEQM]F^t xB'#xKj Q2Fɠ > 3 ΁ e,uV150) DcHJ_ޞ֨߈=d3AFHcN4ʨHlVD/,[RDaOl(1.TM0ܗ0,U%jx~fvY ij HnD"=)Pw\Y ًRs)"RD1^@,O6D=$:}U0I'x;(PK(3o@+35r9RB((%WXxA jPzb0 d!+x.҂mPp`RB7^yC" DC)H;59rVhqp$@Lmq@i":F!aL pJ$ \j"]!|wӹ4'w]QkUfƀuRsC8~O1SDw`2Qj<~a;q܍b#J0[1\@uNJP:9 Ird6)3T{p0d+ПReJ9H`Q=,P,*AUuۛ;K3:Hͳv\'jHnuv W]gz(ft#9kVupz\ih{c8>٩lTL4 rʬ+ Bw]9)+`LX$H=y/^: `eBظ &b8,t3 Ϊ= %C: 
6jVfEf|P;հaLJ` e/ ٺ̓KփƑqt F$`S;yᱱN.=fv x @wsurh^zS߽߽ kI_}pd%F2@ܘ>w î͝ /3Ew&@lIFR^UL/=_/["sb199\P]{qЄ)#&!*XzD1lXMyjcí=8Dx'bgՙ])T(Rgՙ]/:Ls$g;7X㵖 =/tp >FT X y&1Ŋm:0]?y]XH5*%i3!Q5n}~E3uzg9_&sj ݒwJF3R˜jloֲ3(joF~lמ~0 rB`8:97LvSCf>O T9y>l}ןߍ'Ov⯃30fC5t:,$ymG0`OJ,nSiנ]?%ydA{&O!FǠSF7'x`vo9߬% -$ax[hSmp'kinvv6?F& d7QIx6EeԦ8TaSL!"*x vsk`E1o^^C6[HFF v#B.QG>ToH}ln $}/Սt#xu\quޟ'{]lE}/W1"9gwmvmܵ]+ܵ@NvtYk#CLT 5ps&:;=d"D2]e2dsQvp.Ž 8Qšɐcr}cȣ0_g9ww\ ~Zo>,I߅oI./1t3T`D+t}l|8wwibu+Q. ]`@ A RĠ*pۘQl7t{"@ ⒐Gfp+hλşY! K,G<̯YF i/}J_wݨ͇kD;LX2W` ɊD*f][:Ap0-]<5S5֌.ۍ y>ְÙt5?Pvmd.&|pYÎKg$Uڹp_E r  rBlvqѸ0dfe͔|Qu=sN1rYݑ$ BWJjZQb6 @iVZCYk@ z0ƀC[Zc瓛:%-2Cb)i^h_6H;?`A1cif7#5jRlwa嚞dz=fpbh{a R9mYib r(N&hzAKri%EA%)%ky_ȝyffgfwg($HTd%:Xm ^GdF GL$2N`rK4 .=TD4(4\%Q` Eh)'MJ3#)S9IJmM ["T2QXinW p!cH$Q(`@DhnKQ %-EX$1' D'(2ıՅC6rE\U,XEbMN]BF2-l@RIylV $ TK>Yӭ.Tg0g7T]w?oԛ1`oBitu/wUm.'&wh5Wvm+Wws&Na4H7ly^`<{STC ,$zӅl.hp<ဝJ f Q5Vd۴M˱RnJݕ+M+.EDvV(i]eb`|fM Pp7s4}RnثWGZs.c?q8 (ʄl?Si8o)i&L,#E9hbK!r& a9&_K!X.'Yݺ&Ѓ w A*pXԚUlcaeϪ1]6r8MV;&LVKs 'T!SNJ]+o[ T;|4{v%v T ܮlF''b6HWb84ZM`.,k~`zhfvnsF Uno5^CեL&K?<=VѤK`%'wIIRBvKhA?ѧ"irgmjnr$Â+@#rJƤ໔kG tFM\('k{TL5p}h]8Q9Sˀ<|*b@9ۀ'/&⑔KQKM5J eQ̔4rTSv*4߸9Yn[M@-7 ])`먏NC 4ue&GN\|<Z+%Ō)H/LF#pI45SmI9Gx9{r׆`j7eNɧE |mJ{#Ԅ%D"uR@>XjQa*V(,~5{Atqod(8(T`Esw{~3m9J[c*8? 
BЌ6?hF3] ilJ>w]w{M]c\~֋[q,'=sӠM[sV(~P =H̰n6Ç(M-I횋v"5&P}w;X5fl{|Qbb<]pCe@ h;`+.zkr]DCYl,=U1c>myǓDZߨ}+)}{-~a \+S8}'Y3y1n[P|zd|;QRSst[P*,no*zZQ@JGV[wBY2zG{hzF4&mXيn*7!D\iQRr%Droa0` ?Vy SKcˆzKC6#И-냣K06m8J59u5`xjvs6[:y=DA(N!OOuȾ1K$w C˻}Z%'Q.%7J>ga"*ny5o6UUpRlcSyL1P~wL -cb/!5 JT 4G7 'J6 Q Hv˷u҂(%60_])Kq́gs8Ov\,ß&b;VDk'Į&2 šfZIPlH$h* MLP-D1,q^b;38UҊz3;VX~AV|ntYoq]eu'_MdLl<@8c얝+ZUA"͋ gk b"]Q߇e KǘQaVd9f[Љ2qa`_Ny S?\L CY#gglPw ݂.-z8t% ooRd46iBl̿}q {X2{8&Q&7+5| t=/xEr .MI.2:K}8Ou 3|loOgTy }3Ogw@ӟG.|>7cr]1(@__zTo_84f|;pAFR4r~gf:}`txidȤ+}O<ہQ MO0JiqL@t4}I?z4)^rv"o_dF6Ue epghFFLR- /u>ٍ܉qQb,b_I6ԉ`'N𿗞@`0LHJh"M0׉)D$i a\QwGGhm.w/ywt}-6Y}ĥ˩RxAb,45`ls5.L~C۹-A$`/jcR 4 ezk^\_"a!5&pMdf1/a0;т+F7cN@ʯtDs-4kh%W*dR?ui(A 5$дLo܅ Sf`~%l(txttZHar 1w;;!W<bH[ޔg܃ȜD5+ǃBGj&Ѭ $ǿ;Yg<+W}ԒK"[8oa֠ Nߙev>f܌{2M#=pnv^鳣tUmOW:x;}Txezx'4@SR>+͠7ƽ;8Ί59&6T)dH*sl澑eA\2PahX 4 0@E'&QBPd)CSbzgdBJ9ώ 4rI<fT|j$1BsLv'5YCQC}q۔ TI?!VR!CaR*N  j0akiBkIx_ 7ٲEoʏ V q?8>yDY 3JFz͸I|ӻ-k }X)J؃‘&@h yL<1$qf˛ Իw*v}3SPPNeS8}RU7B:}_U }H;}ߝ?QNw[O7Ӏ)1n9~OL%2PabpOR+ ~p$쪪u7[gKK l~^Gh7Mԟ')tp6*C93T5X:RDR+ҋjE+q{*6̡m_V[ SdŜAp|)qB"X00@~Pm[CU!~+33>+3-Dӥâ٨%ؕ1EX$1' UX8X`Ɛ#,"&A\'1.A\Q20LƔZ`.eb??t.ebv5`Z.3% cP5$P63tERZ|ˑP]fQ5Yz>R [hcFGFǷ^GLDTbC I,%s+l(a(J2QltXYX\*Ge',l9\@ȪJIDY\S:Z1s d yf',Dmr%D !- 8i_$:C i+JGJ@-zb aJ_tP}[&5bd[t{q!>6m#^6_<$fҎ @[-$9i{s,Q;Ƒ)]S0> EFmea3~47E |)%xJ ކRmt4ԧA?!OA*BTxF<fcVˍv GpJXSM^^kHvU6t"|i%N_ͳ69hRP#׸ .wE{jL=4LL4 ` ZE{.)6"Tc20dkaq1jyL yz_Vў<#|9lR 3J79Oa>_t"YwO7(ǜ:b nR'w:/Y.zF1;^4+Խ:#l8!01e ǻ;w&V3MIcR[gp̱1JNebh,B+ϸSȇ@24&/V#CЄ׃E88ijakb㵌 ψJhʰ $-g(QQi+S)S8Bɮ3C™h)heUdl&B1C*-<%,5zf&LB.+\l*!)4q%OVM;ɘQLZZ-g?k))X$5 [E 6X$)_)e;I5lu0rz<*aylS)<<'FD/\~+aJJMP!s0R%)2:MA4q'Җ"0{O#Ϡ`&)6PJS:u\2k(p+a%LD'J+&^H& *uF>#VyZh(E)xJ G @5k\%Np 4# a A'AL8SC}3)JB 0S XFa:cRPR$i %I,JPBl$E0NА33(2ffՑ8̈loJ#˜I4\}x &EpHQdH"raNY~ns` ﷁ^6p\vN,4;8jo߽󬿄R!o^Of頋xS//$R,ex- L%ZL_O>3PȻu&:w^LV=$Bw\2x\n}h͖q_%ב<<Ó0ڭS,Z ;?ǢdC#y@Syl2O k5y-%p񹣐؁y)VL"UvFhA9/[?%BV( q8_VGh->Nexi+b`lԎ[RY,k2d+يղ0d+يCb(LٮDidB 13xlSc@L+oHWjIun>$Ǔ*,!+V 
J1hG}AN/8$5:ڄN9YC`yRF+?T8k:u;bS&WZ<|TDwmq!ּ{I'+aXcc&OIŝؠSetO8,1N9(̥"8wz2ΤrKeLS*40lo`^?(r)$JQS\+Ű+"h}եۚﺒ&bO% +9"tvs Q]kr! "ڑIIpq~B{: ![KX@XcR@X<= m9hn"N|k= _4G^41޾zFwMP7DCt t~2 l *+PAFn (}3j2 f}US9秕|<ˏcΎZ^s5yʀP[@>O!*Ž:]M4D9)gC`[.Nt4)chךgPJZIf)ӌr)c =`}"_hNI۩\#φm4* n4G]4ugXG><"@{(ٯn:uESvQvd#'cjEndwF\jRS#ke3V3%>VŷJ_J:d%/S՗NUo3a^,\)]1w:yPY _<*[{9s @^'7aYSk:ҒW/CUcOnTAL^hL:DHo:}KN[z?!D ܶ^zzxP$}sqjO{'4irxRۢ~/iՉ#_p oY$PvN٢9$vb/9Vk* ' LO .0uE7I Wxv2m*&sȫTHH#"_Ǜ2~-lWB~V&.v+s2Tۦ?8 a28L A70-)Ah3D=t[c6Hw")Ow8(HaxH .0㝮CGh %Fs8i5Tospc'!!H+A[,{, ďfljXp+a}vM7xS .ke5;VeO}dI|͌(?lL-C$5%.,fؚx-c=p3*T/m4P*LG*e$HdD9Dx+FRfLL# ,6 E#AЙǤ[M~s,sK'~ hE`4Nhss#e,Ʊ& 긥D)]`Rb zqJXj1 PQX!85TpC,SӨR!MsR_(> FVZ2}#KT츼DD?|B_o A{ jP̓my j\(W0(};?_X{X{X{X{[d"Mbb9ORF6)X§)ƌzra<\!!(bHx~ yaq{ (eZ޴̟gC,{ eޒ@Y^@Fq:Ci˴Y)_=xMOa._j焈қfz0We*%V^-ھ8xRpoWܔ7U-Wܼ˃3V4-7uɊ;vcy>?9,~:#4sE`23'$FMdJc)e1|bnΥDT#LWKZPH-AX-=}+TO5%=1!$TI81g\LS4:v\9=߈Jb4X+V bP*)c:CI8Ns0fv&bmU?M8ެ}O'74H0#)I`䠯 R(SHHHb#Y`e0R"g3֥3 9LyŁ$)ys(EREre"e(8@?Xpv7\X/+͍hj j"Z8hL6,5Up)؀<}&Xl !0 lXy#!¥s:s{k$O"hej4wU0gڤԁ5fUiXeZ2՗ K![U%bV5ݴ#L"UuՑ!թ~NƪT;4"/Q5]gQzi2kZaQ%5a_]ZZ)&C/Pm)8 oHgw4̓$Va_qeoZ­ +!Òd} 2ͼokn.r8۬ػ6W!_&ϔk[ $e`c!V[!%x"%%fMFy}aYz t$ʟ]<Gn`_ ?xru3{z!~G?i~Omp}~+s݀BOPHKRr*pIm1o8?&FTI\\h0µ[Un'{|!tP|0A{xILm%GfwQJ  Ziݸ;-ZNOgͯ'@Zy;vYݬR]l %'1RZgL-y V[ ڼp-ъ-Ey ¯0O=͋Za6=UV> j֒dBFBnѸׂ i⥅]邘ǯ/~;_|T|[1e3p`Ad vwtnϗp|`6k< -i0}X ܃[nGPqk7}|~}S@Bb٦5KqѺa6w>'魣E*--GZWKH6[|m_luGY@s4hݞt^z+u£ *:  *]$}۝mNIw}x5vw}w]A}~~bs8}ñ莋 Kޞp=O2mՎP^~z@;Û}[9CٔB3lڴZ|=>vj[nv]nZ=u@R3YÝ|#ۥ:_hAqjwț!vďp޼][6MW = `)+cN{ds9mA7&f1(+) ~n0nqg[h@)mݑ1ݕ~umc_q;)y\'%qsB%y4IEYZ_E[I}V9I Ou$)zzZY_,<<$!Yl5R  ,[gI"'榏:Ju9 [a໯ mZݑvQһ|'ZXCe-G[3hV|z6#,j-J31f;k=u]_mHaG~𛇜 >{h@vwxivvgrJڜA ׽~MHU-& 򺶦IU}/"gԂ+]F ?NgY7 lt%cj➖㋋:Uxҿl,F0tQ{~A1Bm\jW}Õغ9YZۜLO7%9@ƢZ2qi~_cKgĖv\cOcvoG8(юهUP+O̙GV&|ubB<9-85nx@+g P'($3hL2R6MmlƦ OĔĂ1dX.(68") ºʂg1G662֐e+*JJz!2RY+-Z-Fâg; %jZA)$مbJjn~y8 :îWn΅tBVeoOʗxnN^Y\HbȊ;q6ZU02DM J9s'^ k0Cs1o#uCfVΦ" fmofbofi,cUBH!enf5 9.)fʶ* ;adPF.g16xLCVDC>bB[/ 'o>:'e?cDM ; !cj_oN-UBMӼǘr7~~Ț6"Jh|,ل;;\违L}x*dAָ/ۃK-J 
~3e%K9@s[@R.^<NPʖե0XR.Bdq" GSM{ui;d=< Lû-"(JJ;1h~ Rrd^(%LALzA׸o RM%ְBO)% B**K+*X{:Q>O'W%OEttpjC174ULxOҍYz.:`V'חV֜-8MZr[U:HY\vvRz0mSY$װNƫ6v\jo@ y{^aWpj삀[DN/Y}]( <{sRڏ\m j VV^E#B<JDmMgArRj#B1,Fvۭ Zy] $s[;:5V1lFZ[mꢰO:Bv&@?u < ϶U4uٕӝxug23~;9=A?ˠ_lwiޜu8ͺ !~SW/ &SO&W=W=C%=?W{g8!At+?֞>_">^,>F~(3>7~u}kFW %ʟ_'uFF_&<\ E{T27$nje†g`RsX؛HSR6sKBY]ԾmZWXJrę esN;w9@IJe.FrpY%x'Xd9KhI:'պ\r]bЇՠ`u BK(T` zAd 0k!TfХU$x 2)+x7~EK񅉄4Es|ɱX#;B9iq}|uyQ Aiy\ 4 TF'+2/UUyd)^ƛWkr2\KFz Chv aԙq;FZy,E똂8gV%Qtyrh<_{Ɗs?pݧFVF*BMxV\EU'Wb ?@*q+DR芭B#\*6D; "aBP}['e9eDbQ{[*2=btĊ1DpSz99 ͦ>cNt\@#&;!6XhSv:z-NH`skլ)$ص_FmmuOb #mA$`I*TW'&B;˅]yI%X7\{DDa'!CLf;!⋴EΠӳ` q5-%!=z,g. AY}DĦ Vr#ʈ[$Eh#W'KZWI>!f%.I SuY3߬|fF)QAld^Ց%U پ*䤑ICWARE|xy<$R!y6l0L ˈ!N˂^Aj!Zk+X'DIgHHT6p+3EJ("ĥfA @`C ЀHMߜ9+Aŋq>-h%4I&3U@SYsTeJ8gR,.IK1wP>?yrd*$B̌{^d@#U n $RBcu3 `X$H\H[5&4بL^ PZR!!~hgDѝ4l}[ \rLdo(n*F)< #."55_PӫALF.#D^ǠTDj!>dGv-'Vc(V+pDMMsBBQ!V3d@#(#-%ٻ6+W "UELLx!jT=0T"%Rl`Ö&ު{Ϲ]u hCAV-11J@pk/   Tx~2WXknuExJ#q

oIf <@A /Ҝ"5а"ʬ5gm6 ˂ygGjDh4,3IkxMJ(1hf #XaY9iݢK+zKUZX$'R)BTt=dhބ!0v [_00Mi|Œ-QAU 3mF$ZYj<60hiM0d̕_gbmG@ @VZp1($;77Vj/jcA?=ƅR E$|GDN x&R€ənnlr!H0gPǹ!7 Q't_ctHp*eW}@abbI'!a&/KA`?nwpsɐ@MB`y̾\u=!9|tIVn:l0*(C":X.75 ͇Sy^!w&X hYBz`s`@1bX  S^NvjʐOh2+"6ceQ:O%1h# 9KMeNmhtonR[Wm#4:L QxC 2L@#K ׀n;E80#<$1ldL84Mw;Dk`VX~팥Qi`4X;:p31*[hJ/3n̢ڴUakl&p0)A/; Ypݰ)ƹּI,\< S :0 L= #KK=xB @Fik7e2,s4VA"*DO"0$EELYk*Kj#F2ڪe1Fy[ 0'eZ1,ɅOBGe2bĐVum6%~y1CY~$19hxDlFj0-f=($M`6$0NP !ĢS>R;M7[Vq{7^N5N8nN7۞7.>ЋIT-ͦ(}7K Eh,)L? UT&)K/Ss4]]4iZZ ̸,ZR=h-|5t~*뺽6F> )pDkM.͢}q}L l(C)Bڹc2cѽa۫k4iP^|u˵(S(\kE&IZ*⟼ZqQ}#b ]˅k*hPɛ9E|:iNyld2kNY|a=-Mz?^58dikƸ?7Cd0:SmܙÏ)_Љْa TU|o,ˁkO;u A?~S[FA즅KͥvrkBK=[D@X9i̫IV^sТ۰Ru{\4zv/h)EfR[Yj\Z,uR]XrR4[Rl~5TA2?Ÿu)f]|t_N'̿dzƦ~_oZ]ɶ'2@ n x?y5b\bͅ5~4L)ËkYCjYiYiYeUUѲ*ZVE˪hY-eUUѲ*ZVE˪hY-eUUѲ*ZVE˪hY-eUUѲ*ZVE˪hY-eUUѲ*ZVExZV;-Q :YK^bUvղڥ\lgMIHbͅ5iW22 ͤF׮̵q m11\fTAJ7WvFwm][vwrǢIG5tAkQ qBhg_q_>o߂+04R /~5?ER?-}.߫__ ?R7Tq{9?\Ywg{;vŷ=^i ;`-_;1(,1/_ƹ@LlKϿAxכ\6B`9}]6`/_5nO 5I{~ Ajk|=ϣɴ}O$/{]$ͤ=u\y>kL.GyN?=~z?io޽4I0~Lqϗat{/7 2ذ.SKS""r#15bm ld3~.VM0ѧ+ *f>0vԍBaXa諭GaEu]\-"O+L82~YOS 34j>gG4pdv݇KϮq6Bx~j,5`qx`2/ǣ4Rfun}7M$zՄmT{ImH?dw|kѾm~p`ٲmW_{ʆkuɥ4:e6_kv-񷣏j2 qݤI3]{f}F,evV/oͻmz~Wvgj-֜ϮF-/xro-|WM[f=Exi8˼շC?L= aކx@}@؇!{uG o./m\KZO6 m47!J[W;Qx]Gb5qI<+s}wՋo̦Ǫ_GiҼ uO_~FaΣxm; <~sǞ fvru u(]hh..TBq߯;I y'Zyx??>6{Jy58qnf0XX㨏5׎W[v_u -8Y X& J0P0ML~B#5QRfqT yafI譈r: ̊j)Qw>ŊoZS뵦\`)#Ԛvqcmq*n<2>5tˣCr࿒0׻:[VS:i&CY'|'ԤPG"ubR#f>Cсc[=I<` LPHY[U.=yq() ڣ؂hO NrVpQx !_eo7lpNuGZ@Hlbi%U3q~4tPu`3$Fx :|x#1 'k_'eYb/4+9 A;3&rXBI|XK*\?*)<+J)N&SHKtG'rQt4emq/ƉW745Z7~kxf`Ֆ⽩;$::(}qV-JyMB4NM5gu-FEcV%999ےlmkl=E-6hѤn:.ZN-Y&i)D U Q(ZGJ'S(dt תK9q",z,ѵN$v.%Et9!&NĠOJd^pJ fSa_JJpk'r<` lJ Ǵ%J2Ӷ[rV [_B&U [m8DKa똅-.ᥰuڅ-.ۣ𥰵+QN]o\qW(Tvvg QM1Z'A_m6ɒbUyGJ&%Ѻ=k $xog>rs֍|9s ڗ1#Ҙ1A-.ZDjq"ƚOZ@^iW CsMp!K18ˤ}2hWzaHq5Uz"è$ɣ:mh';ԷkCV5Do;è$,z16r֊ W3<͡5bCL׼wû2vFȇ40%$pb2Glj=gћRK9{kߡYK %ab SP8ZK6RtvsZ/o.W?|ogs.BST*e2]ނF& &B &.nR)Eڰ26\YDcҧ0JbAN`?8Q;`8;2l[)lk)[8!a  w}hgF|Ɨ, b  [ʂ)&=Q*ֲKќȇX c'-%9[2ڢQ&t%6v0ݮXsotn988|<.ߎC`$-P,h;T׉v);F65jndj9"rfKc(YL*s 
2r$18spƩ@f\>J?ye]L釄ytkDitYxFe~Hƞ}`eO g]oPcduϽDdRDvLF=l{ ZAKQ\+9sr>6-i׈#@/,[Im;}&*=pzKC;Q7H=mVLRNZ(XJsҮB=StKߖהv9}qv}%?Mp>?iԆGg߯~ \h}[{qmDsxrk6lj1[a?(}#?ɽwOJ}c&:5#AЍe62ű_u~ng#hŶng#)qcp;[د;ng3Sb<vOio]>ƈ t(aP71HqP72ϷUt d ;jsۥb#ZB%H*#ZB%`4 X/ ) K~<ɋV;uߏOӳz=zD`6Ԏi~K 8]=([^P{|tyʓӉJ!/_.RZX~IO|./-T~2Oh_C>Ds~8zfoq%?٩>af~wvyudkcUzv~䝧Ɋ7#̵6 #p>$>Ϟ,`c~v,>g]zGӓt꽯4 ?Cy7rJͧ5v5B-CfM: {gF`1A<ߛցdIrTB蕻PAn=>-sSΝc=Gmj{ύŽ ;/ ld q7[ zp3 cG> cG4Z/-2ߪqle tj|Tp.ࣖG5|,*ڥG- G!:}RHA >j|T>vG隦'qnM 4=% yu^A%Cu`K61A1dyFFqyFP70,#/1:6AGws(A"8yžž~a <1g~/ȕl Cᢥz|&S N'papԍ==2M%s|qqacyM=ߏON08o3WnFѷ_}_M?^e=kW>cyuz9}=;xb^˿}蚧_C|mz_o$]^[VgNkz`Q~o~cWSÅЅtlY9?|Oܡ 5'(kKKI!(I)B(+D@P(v)dɜOwۿ#(ި-inZ\L{ wMzfh%f.iFHxmKAZpU3R 9oY4 { +`gW)M=ʆ=M)W\{R딗jl2O2M1-lAŞ9)+-,+HsJ˭C8Z5+\CǨBzNL=+VZ# 3R:c0 K=fϜ<"Rj5|C"$BKaDa56[J T^E)A3J7hhJw  DzAڼVsX9hvBZ3t'ݔoh|&󫞢(Tr79BK)RJ^0)dfZwoy7{3rlx"F9) B{mGM\"mb!8rlE{)9b{[![FE?{aT-NĴjKvf&!LImD(+YlMܣ&w(k*ul2r{Au"BNށxnYe9(Ğj:59T-`$I[ \Po Ӱ\m\U֣gs ([لlӱ5  ]`6%>*\ـVUW^a͕ ̊=J(%XbV9 ӡ9N:=U 8sd3̙z)`V͂-tyC Μ55"\Ei1l6.al5૆>),r Df^!Um~c. 
0u ?Sh:ްDY@ cj4$=ѲdHeR_-y9:̵mluJOX`h0ZS-hhh'ue>s5&̋9˽]d,DP'&ω C2_\f ̍[ iўz'}ESnzeJ4XZ ՛ZJ-:,&NǼ3bX Xrs kф\Ŧ AJ6tCNj5X'9۴^+\$̛5yBT;\2{q !,vAv5I`,j m` v#c` qΑj䊪M mƪ9: h ִ1ŠmNN PHeG`R.)h#0t: $3)("2W}@ĄD-;pwF .'0CisE pc5 !d'Ff6E/BںJ3&0 ֨p `f1ěTZKy3 x+ۭ" *H 8%6ӤD `+7r%hLX 1;vat\N;x0?X-ggL@Iƒ!Q]"uӭ2o&&cF{,8PGuF(J=;8EyYm)!㺂)y(땶=7f6+&%G"U[#PW% NkRGbk+9bF1 ^Ǔ7WBd~Kav_] }Ή6v}&҅ov6gM'7o|R>r;_c4[MkpS;ZUk/HXs)w=K]/@n/@]/8 >p-H;r $INՒaRB;G]/@p C HX;ދOAnj)30n 0cgOV9\x٧qZ=\ fS>X(6o JCX=jgC%5bŚ^=?p1?C({^ j jg@psA6YRv 4ccReg /"^qJ{S>xnCQ^@j>Q*Rp}H!8D-}ZPh:Y`}/@ن@5k\I5!!/)_SUaUO +}ʭN .c?pUP41g){^X>`>@y!8P.D;MA (f1YEQ k0p>Pp (vۈƚ{~:YH; <~ZVyɿ OBʋrWN{Vun> Qv,#t)fޫH޺ώ4 ̫0ɽMAG -coɯ$|ߜ|}.ߏ/ޞ洶oE>y:wZ|aǘd]_X'oyy~)ᔷyx=]^x:'~~%.?wE~?ۄ%~|Qse^.$Ofo#J=]ϭr{ bo0{utMgz\t#|b¢?շ3*JAq3B׺FKSɍVHy%lE?~uxǝ孧obSw|>UNs_E_SF_EI0=pٸ;eMqr5w|u>YԜr񓈏)%>NX%09?MRtչBKM)`)D( 霄R^P(3p- 9 bD):c|ZL/@T9P-PZpMk1NZL8TtQF Ebze#rۈSV+F֨/6 N?(w3m5}S*!пޱ_ q@[s |^/r9 HuQ٤+?tLObÓrf6'xhyӨ CQ5A7_+bv~uVaͱM1Ies< ٨ I5<[37N ;kiɦj>bt㽝2לOuX*".g}ĿϜ^[ZOfiJsVt3&uыGӣO 276UO2ousK韥.xEsXI~7[:;(8x[B=ry'~}/.6q0}7̟]w>9Ik…5CntCn}Zt*(O-##QH!==[1@!'3@"3cM=WEE8E7Ͻ;"JO5ԌL:ZS15x{X[Ig]7֫[Z[p-@Wm;LH =z@Pg@$=n4z7=pIW61ƣ]t!T38f3pz{H{qV}kSw YZ-'2ӁX :`<˂qm @KEp,qqp, ]vHeg9*Z10U3O+ dN9{B-f[lvȻz^PѮ>"ngy{lo݀ëW֏w,y~VY$81J/[=,_o_.YxW=-k^x|]5o?e$Mms׿~sݷW=?[UͷtFkݮn1ݼ[?._2/'E]=Ig:e'0<Ah3m| Ю8I4܀ɚix45z-o9W~$7]rw1q:5"锳fciy 2+8LY}[9GrZ51k!&#&r|hqDB!7LTu#ٿ"igIV,$A]l}Ye;jRҕWIauXs/k9ZɖQgҐ5h!Ϗw%"I8ݵ(3o<3l],nwyknLښӐ&!E]Ũd=g},soDNe;Ühcw ('F{+lKI[fl%'ɗ'"B `TzU6lHŋXE7@#10,K=􁱘CY)4(0%[}b2rH& 3H^ֹth53Pne Gv YiM@i5fR. w1HH%8Ԇ0di=rd,#-L|7^m1)t^AE_=QާtH]0\cںsa)Sg-4Zck# y!$bc#{s+hBB@9U`8bhF݄lrsW d6 Q-2^c=r3RH()>TÉ=:cA:0o6jB=WihDM1hHId~'x̶j#Z}GEi!fP3BwdKvbi/9Y&G0_Iݝ8KOfi6z0F]fCBJØSh>~dLЈ4F #\X +vS\!]'"l4ZT+V%oMɑ;RNJdԞlJ,Q<8M5 ^`Zcðw`zd SlA/\G#TTX!YM!1;,b !\s5uۡ8aCb a (4s>CiPs(|LCRk(z@[!/d)-OFRe , Z T#=D_n4 CK`(HHl4]Wc/d]ETf R7ZE 8j(&skd#]22*Vf`n q`>j a,jL4ٍvP:p(`Q-^qvA&`!\]B/ MֳP7kjmH& el êc,]'i>|`a? 
"=dHiD`^;Yޑ0tiW\jHüHF%(1G *6 -U %d塟;OnRcv/8 #4=|uic.OR)!5.x tg7 ?k ) 1 jH, mM1g8Z.T1< A-DG7A_c+PS`YE$?fceh im=<^X-":)J@2= l: ٚjjʐ pyPZC!xw؛Gכwp2 il` v<+lDMzXF^/tY#pv4DF&gQjVmA)xX{%ED`!-fJP|'/m$d%ۅPFn~v u\_POjyzүjnPg=a5BP!n~xG9XT3FφjOij0vMAuw5W $F0قFhvfc둦{'o;H#Ԭ% <%0 9ȷJU 0B{ w幇4=;U0TcX`[`HȮ(zĊ #@z;zyYOͰjGUh/&WH$Xnڰ $\F1;;kd1+[ 0p.z9E#$qԌ# ?z1>k~QSicяtIwmbJ <+AnMbuzϤGAL@TPh#Ȃ睃цt5}3߼Gh !F D܇|z zOX/.a&~{gKg{>+0ePV`_^mǕ{>Ekhhm-xSOF+{qx]{[]ߜONdh>m܍uHLO;u~q6?ݺ;icú@6w`(GX[WhGX['H|Xhw}Xm!Κc"ݺӣ1H{,yncO۟>o'+%;~_w|ӓy;!]R-M%%ŖJїQl0:j(YaS _rpU?|2])唺#v7178Q鱟6ܯv/`;%|Tf6]_/>@~c/S[ou/}ZTgZ3Il -&덱C$<{k%27ޚNzW=fRC@zd3;[+醖S`fs/VƵ${;޾_Mw-D&VZHrmua*לA!zy=^"֍"6}-VOEFG?1 'p/oo^!x(΃4ѭ}L#mǛ:H;g`UEh֥F+3vKH#,_R0̐IjiAK$'oN^||پO{HcMێ- GCԙEsl ~ T["?vjl~+I-y5Ұ1b+S)#6R TnE `, b=PcXw`?NpvAo? _?GMȑU4ڧBk g'n<õjӟoKdny۳z/c|8ݽ_uonzDmU7ouO3ՔVnf-#=x젆Бc)A\ZE_@q WU$BZ"*5ճ󇷊cJ&w_ov- ۲Q'/8d 5Ǽ|xh @R%8x=|rӑ (ni ge~{l(q{7n4z(Qb7zylVYVC ϖ'+ӳ4̃F,#LLћ:! 兹yXSuz`'NIܣ6Ǎ#I+0v6|B]lQͣEY*2أ6jȈ̈/pn-c#m0~|flcVSF cgƼA.'a'v^Rk1F XD6ri69g9rA$H/={#lWc%F ,*0~ZsBsM{qMG9+Cw2q!'T]=&0\RdYX&݇;p`2,E UVJ Im0 Mւ!oWύtD/G)p~z9IVQ4w>!=d}„`,@uOݫI:?-gD9U#97hk/uG *6O-i7șp^L Q8tLCթqmemh zNC|X M#7JeAbY☓(tNC(|.Q<rUYve)c!O~HuÓKgEp@Z&DQaB/Zjnt,YN$nG1݇ܜ!U>_j)Dd/&~\$)I+.c ˡNNm5[.mfݫ1RAWLݲ9 ")뢁mƶRWi}pjwrUjHmy3>LN3Eκ0P=?Bp},*$miYTkb6C[{f,Wx^uAVH@5'Fn8TZn8Dc\r %gϭ =xn`fh4OrFP[iѼiuca,kOm~hѨ+S͓#Qx=KgA8Qn SR(4мV/mTM5۾ڧQ`;#KrCP[f*=My$E_C`(B&R{kfl+Ϻ$hr 5(v?sQ tmW ɑ~vCZxim)mǶJTv'Rx]*pg^ 9^NZuQלDmE'Mrjjz6BZW=c7fl+UOF$X`xZ&զ^/*k*P4YR(1u[[96fY|夘/`3)Vv bT%5؏cgjІ4 MC襓~X'*EnC$17؅\rHKdD! >@e6N.6 |}!3hB +G7#:>s ȶULUǿNu5Ԅvb !er*dR5Jw2Z)+tq 5?Mk`󒞑J4vMu-廈0p? d<%"ő4LTB)q,]=@ڵj8 /P+en=NQs*ċfZdfZfZPNI<(&lvp6d'-sCT!꾅k\aPN6!i Q:yD SIF HR?{zg8ȁ l$h9fޒ˞)Vvk'1ΞH ^uU>a稨>LiL !I*ǒ=~jBCCzM @ +;p)&vך%S}_<ö ;DIj"wx|X ֬LD G^*Sd>z:ŧdnjMKi DۄD֐ͲBKW=gB@\Ձ^F!~x@κ m @o+zb9=&g-$}Wl1:j!m, ɺ[{ԍ +P$*WV]&<[zԛ~Eڪq R*jEղxW*q!7B RKzǎ FȄئ&vva625@[B@KX' zAH`uX. 
3H<ʰ>T$=ućs;Y.{@] Z8=0mϚW觕c];w.% Y|`j9o` gPyE[ui]@e\S'.׭z z"XFbCs!->ZG6:gcФ;p}q]M$ dwÅ2T,e:ɱ=HKׅr;x)npB7>d# ^[kM=I(1D'-+nH` 3m C>"ؠ?L>GGvO0y/Q¯Uv?]Ѵ<m\;l) hʽ W:躨"=>OͅRIɯ(#Tg%=y5ҞslԜ391es#":̰ ~嶦^w%aepLY~9p"mlv(?:QqrFB/n FD{ *}^=[>yӠGOd=jP<ym#UP|WcZ6婜9|y6?~ZJA#bh!d>eEsyHzgݦO03FuA4r5 g$O}:i^ 9|'ZQ4yI@hv k& _*y&l!iMJXLp,4KDc4;'>).'W͔4RZjj8C Ȥll{E< 1i !"̘)%$1\sy@w'v>kdZ: 8N~J!!,]Ό|1V4u^ P usYCd8T{ŀ]eR-su7Ά? J)kR?xa>j6 w-#悇mwFI4˽Ѓ64ܜoK-^1@|;p%N,xZp!z9~<@{VMK[ؗ%k.a[x8'pxEЌ7tkqTҲtB` RA7<|1+QL"<~ Gb$:Qi.Y-wބ\:[}i!jbwURE 0rD+2B@%tCq50ܪ!BSlꄋNc"MA$`$iL3w!Яxo/'^ӾxR¡z2b k~\ NU4O /3w&=Gvl},YْͥDz78ӁSU0vAPvtĈc{FvKTO,y>Y?ӧY6vyԡ0*>|xʬ[s X"].aɵϋɹ9n_;=͠u;hSw8|ɏkHK_nd֫G/r8;_/t(P>ofO98oY]|VOI 78?%Y؛X:ԈlFߌ?_YdyFěo~MVOONCMѷ^g ןM+2CR<[wm37.N/{ĂrwfPg|vk^ptic<"I ^dQ j뷰0ՍLٰ;r ĭ @|54|՘J)K26'EA <C$e )(84HHݘBOh"R7&6׃>ޞejov)Wm2H <%='N;:7k#Aq/>+օǭejvלOv3ۼ6)oK@%>]>&"ԕ7 uxi"*?o^p;3nrp[~tzi_Bk]oƲW98ݢI4neIiz$Zr$k+-Zə ggD& SЖnclh?X]u*Z'e4]Z>yyr=4o{VʢqCUI:sQj0z& 8Mv»>MR!5[&4䕫Nɚ9«OSo0[uh]e`ϨbD~ m36 y*Z[8EbofTgGl]zQmwKɷ.iլtY c}*?Sbͧ:&v>^\ ǿsf@'ފmMpxm~厂۟fd&uJGg3G ]h+i nw@+-{ϏUUY+lʉ?]Yæ庽oEqDkgeECC]* hYU=]&cuG?WтlN ǬohBU|>NsɰVAf헔] *{C)]@Hqx' UlqufػD(fWa9!JÈ5h̨5խ*dt(R4F+(\:x;OA ȹs.[uf۷u*&[Wa &%dr Av"9WBwajbxt7LxveAqWԸZO8 ruZO͠{[^(LgLP0Mf~=gS?w^Ҝ[ F3sK%ekQLEF8 ul0Aޘڄ(Z=8nsճ/Ș v瘷C#̛b3̟>w'49qK']W^+J[pT?BZjB568[ͅvʷ[Ft!χ;'2jGRէ5is\0a"F< jS B@y=ZJ"‚'!$q'!)8~1dg1US>~@<` Lfzmǀ,x} _+ u/=^v=qto;է ʡ/t4>6-5BK?aW;?WlOd!CIYefGR hg(e1K+`dۤpVLI 6[hr{:əGirAF|IV0ܬ{yN[mg{_g#Frk cr ة*ZBK{Gٗ7^դ 7A8dB Gx #,CB5S^ ]ǥ 1x@U$#g~)67yݷ_],zv{͍_^\[\υOov}zS܆P}B Z8!VVKf(0LDd؈E2#.v9euq+RRv(b}Yjmƞ7*@XsZҀW1ƨO a@hPĒL9Qx2V0#Ic`XIф3a'Ɉu  Pb%0y%(t["o gag/ ^Y1]Q6\5jGl.c.#(?0[LX]=Hdc{Mhv}ӫ\ݮ~oF \1U>n.@,00'f??P񢊛Wxb - @qX~33]ǧD] ,MĐČoo T"pQs%[2vq}\ᡝzRtC'ۍC1ӎ d1pa;^e?=&xOv@p O_UʡF%884,p^Y\[bd3si"Z9)4_ @%$0Ren/˜ivڅ%U!ѯc)7W']?3,SsJW!+/! 
Jan 29 12:04:27 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 29 12:04:27 crc restorecon[4687]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c138,c778 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c138,c778 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c476,c820 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c263,c871 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc 
restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc 
restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 
12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc 
restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc 
restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 
crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 
12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc 
restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc 
restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 29 12:04:27 crc restorecon[4687]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:27 crc restorecon[4687]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:27 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc 
restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc 
restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 29 12:04:28 crc restorecon[4687]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 29 12:04:28 crc kubenswrapper[4840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:28 crc kubenswrapper[4840]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 29 12:04:28 crc kubenswrapper[4840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:28 crc kubenswrapper[4840]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:04:28 crc kubenswrapper[4840]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:04:28 crc kubenswrapper[4840]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.780285 4840 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785219 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785247 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785258 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785267 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785275 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785284 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785291 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785299 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785319 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785327 4840 
feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785338 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785348 4840 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785357 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785366 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785375 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785384 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785393 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785401 4840 feature_gate.go:330] unrecognized feature gate: Example Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785409 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785418 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785426 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785433 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785441 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785448 4840 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785456 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785463 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785471 4840 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785479 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785486 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785494 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785501 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785509 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785517 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785524 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785532 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785540 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785548 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785556 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 29 12:04:28 crc 
kubenswrapper[4840]: W0129 12:04:28.785563 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785571 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785578 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785586 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785594 4840 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785602 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785610 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785617 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785625 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785633 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785641 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785648 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785656 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785663 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785672 
4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785681 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785688 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785696 4840 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785703 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785715 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785725 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785733 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785741 4840 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785750 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785758 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785769 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785780 4840 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785791 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785799 4840 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785810 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785819 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785827 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.785835 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787034 4840 flags.go:64] FLAG: --address="0.0.0.0"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787070 4840 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787089 4840 flags.go:64] FLAG: --anonymous-auth="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787101 4840 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787113 4840 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787122 4840 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787134 4840 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787144 4840 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787154 4840 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787165 4840 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787174 4840 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787184 4840 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787193 4840 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787202 4840 flags.go:64] FLAG: --cgroup-root=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787210 4840 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787219 4840 flags.go:64] FLAG: --client-ca-file=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787228 4840 flags.go:64] FLAG: --cloud-config=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787236 4840 flags.go:64] FLAG: --cloud-provider=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787245 4840 flags.go:64] FLAG: --cluster-dns="[]"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787255 4840 flags.go:64] FLAG: --cluster-domain=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787264 4840 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787273 4840 flags.go:64] FLAG: --config-dir=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787281 4840 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787291 4840 flags.go:64] FLAG: --container-log-max-files="5"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787301 4840 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787310 4840 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787319 4840 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787328 4840 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787337 4840 flags.go:64] FLAG: --contention-profiling="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787346 4840 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787355 4840 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787365 4840 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787374 4840 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787385 4840 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787394 4840 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787403 4840 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787411 4840 flags.go:64] FLAG: --enable-load-reader="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787420 4840 flags.go:64] FLAG: --enable-server="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787429 4840 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787441 4840 flags.go:64] FLAG: --event-burst="100"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787450 4840 flags.go:64] FLAG: --event-qps="50"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787458 4840 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787467 4840 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787476 4840 flags.go:64] FLAG: --eviction-hard=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787486 4840 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787504 4840 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787513 4840 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787522 4840 flags.go:64] FLAG: --eviction-soft=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787531 4840 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787540 4840 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787549 4840 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787558 4840 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787567 4840 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787575 4840 flags.go:64] FLAG: --fail-swap-on="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787584 4840 flags.go:64] FLAG: --feature-gates=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787605 4840 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787614 4840 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787623 4840 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787632 4840 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787641 4840 flags.go:64] FLAG: --healthz-port="10248"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787650 4840 flags.go:64] FLAG: --help="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787659 4840 flags.go:64] FLAG: --hostname-override=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787667 4840 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787676 4840 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787686 4840 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787695 4840 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787703 4840 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787712 4840 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787721 4840 flags.go:64] FLAG: --image-service-endpoint=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787730 4840 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787739 4840 flags.go:64] FLAG: --kube-api-burst="100"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787748 4840 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787758 4840 flags.go:64] FLAG: --kube-api-qps="50"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787767 4840 flags.go:64] FLAG: --kube-reserved=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787776 4840 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787784 4840 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787794 4840 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787802 4840 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787811 4840 flags.go:64] FLAG: --lock-file=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787820 4840 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787829 4840 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787843 4840 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787857 4840 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787865 4840 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787874 4840 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787883 4840 flags.go:64] FLAG: --logging-format="text"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787892 4840 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787901 4840 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787910 4840 flags.go:64] FLAG: --manifest-url=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787918 4840 flags.go:64] FLAG: --manifest-url-header=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787929 4840 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787938 4840 flags.go:64] FLAG: --max-open-files="1000000"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787984 4840 flags.go:64] FLAG: --max-pods="110"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.787994 4840 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788002 4840 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788011 4840 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788020 4840 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788030 4840 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788039 4840 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788047 4840 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788071 4840 flags.go:64] FLAG: --node-status-max-images="50"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788080 4840 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788127 4840 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788136 4840 flags.go:64] FLAG: --pod-cidr=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788145 4840 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788158 4840 flags.go:64] FLAG: --pod-manifest-path=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788167 4840 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788176 4840 flags.go:64] FLAG: --pods-per-core="0"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788185 4840 flags.go:64] FLAG: --port="10250"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788194 4840 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788203 4840 flags.go:64] FLAG: --provider-id=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788211 4840 flags.go:64] FLAG: --qos-reserved=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788221 4840 flags.go:64] FLAG: --read-only-port="10255"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788230 4840 flags.go:64] FLAG: --register-node="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788239 4840 flags.go:64] FLAG: --register-schedulable="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788247 4840 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788261 4840 flags.go:64] FLAG: --registry-burst="10"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788271 4840 flags.go:64] FLAG: --registry-qps="5"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788280 4840 flags.go:64] FLAG: --reserved-cpus=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788289 4840 flags.go:64] FLAG: --reserved-memory=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788300 4840 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788308 4840 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788318 4840 flags.go:64] FLAG: --rotate-certificates="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788327 4840 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788339 4840 flags.go:64] FLAG: --runonce="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788349 4840 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788360 4840 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788372 4840 flags.go:64] FLAG: --seccomp-default="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788383 4840 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788394 4840 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788405 4840 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788416 4840 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788429 4840 flags.go:64] FLAG: --storage-driver-password="root"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788440 4840 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788451 4840 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788463 4840 flags.go:64] FLAG: --storage-driver-user="root"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788473 4840 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788485 4840 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788497 4840 flags.go:64] FLAG: --system-cgroups=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788507 4840 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788525 4840 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788536 4840 flags.go:64] FLAG: --tls-cert-file=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788546 4840 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788560 4840 flags.go:64] FLAG: --tls-min-version=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788571 4840 flags.go:64] FLAG: --tls-private-key-file=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788582 4840 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788593 4840 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788604 4840 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788616 4840 flags.go:64] FLAG: --v="2"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788631 4840 flags.go:64] FLAG: --version="false"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788645 4840 flags.go:64] FLAG: --vmodule=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788659 4840 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.788671 4840 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.788907 4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.788919 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.788927 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.788936 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.788953 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.788999 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789015 4840 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789027 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789038 4840 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789048 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789058 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789070 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789081 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789090 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789099 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789109 4840 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789121 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789132 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789142 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789152 4840 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789163 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789173 4840 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789184 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789194 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789209 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789222 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789234 4840 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789246 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789257 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789266 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789279 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789292 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789305 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789315 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789324 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789334 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789344 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789354 4840 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789364 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789373 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789382 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789391 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789401 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789411 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789421 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789430 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789439 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789449 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789460 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789470 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789479 4840 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789489 4840 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789498 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789509 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789518 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789528 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789540 4840 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789551 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789561 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789571 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789581 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789590 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789600 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789610 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789620 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789630 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789639 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789648 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789658 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789668 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.789677 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.789707 4840 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.803117 4840 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.803153 4840 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803218 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803225 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803230 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803234 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803238 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803242 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803246 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803249 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803253 4840 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803257 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803260 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803264 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803268 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803272 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803275 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803279 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803284 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803291 4840 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803295 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803299 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803303 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803307 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803310 4840 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803314 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803318 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803321 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803326 4840 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803330 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803334 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803337 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803341 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803345 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803348 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803352 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803355 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803359 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803362 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803366 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803369 4840 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803372 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803376 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803380 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803383 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803386 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803391 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803396 4840 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803400 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803403 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803408 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803412 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803415 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803419 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803423 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803427 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803431 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803435 4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803439 4840 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803443 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803446 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803450 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803454 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803457 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803461 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803464 4840 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803468 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803471 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803475 4840 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803478 4840 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803482 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803485 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.803488 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.803495 4840 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.806901 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.806978 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.806990 4840 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.806999 4840 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807007 4840 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807016 4840 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807025 4840 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807033 4840 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807041 4840 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807051 4840 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807059 4840 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807067 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807075 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807084 4840 feature_gate.go:330] unrecognized feature gate: Example
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807093 4840 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807104 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807116 4840 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807126 4840 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807135 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807143 4840 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807152 4840 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807159 4840 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807167 4840 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807175 4840 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807183 4840 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807191 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807199 4840 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807207 4840 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807215 4840 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807226 4840 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807236 4840 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807247 4840 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807258 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807267 4840 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807277 4840 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807293 4840 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807302 4840 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807310 4840 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807318 4840 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807326 4840 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807334 4840 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807343 4840 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807350 4840 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807358 4840 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807366 4840 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807374 4840 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807383 4840 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807392 4840 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807400 4840 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807408 4840 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807416 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807424 4840 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807432 4840 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807440 4840 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807450 4840 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807459 4840 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807467 4840 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807475 4840 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807483 4840 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807491 4840 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807499 4840 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807506 4840 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807515 4840 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807523 4840 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807532 4840 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807540 4840 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807548 4840 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807556 4840 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807563 4840 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807571 4840 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.807578 4840 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.807591 4840 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.807846 4840 server.go:940] "Client rotation is on, will bootstrap in background"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.811594 4840 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.811672 4840 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.813094 4840 server.go:997] "Starting client certificate rotation"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.813122 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.813297 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 07:19:02.583962751 +0000 UTC
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.813394 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.844528 4840 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 12:04:28 crc kubenswrapper[4840]: E0129 12:04:28.848443 4840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.849477 4840 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.865735 4840 log.go:25] "Validated CRI v1 runtime API"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.902816 4840 log.go:25] "Validated CRI v1 image API"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.905270 4840 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.909924 4840 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-29-11-42-56-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.909976 4840 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:46 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.923772 4840 manager.go:217] Machine: {Timestamp:2026-01-29 12:04:28.921339537 +0000 UTC m=+0.584319460 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654120448 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:021759b1-2f25-49e1-8fe5-59c6e27efb1d BootID:f6451459-d462-4b03-b6c7-2b939b62ff4d Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:46 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827060224 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:85:29:b5 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:85:29:b5 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:cd:6c:9f Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:92:66:01 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7c:ff:eb Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e3:ad:75 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:32:4a:23:0c:60 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:12:24:36:07:b2:fa Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654120448 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.924201 4840 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.924340 4840 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.925127 4840 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.925377 4840 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.925415 4840 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.925636 4840 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.925649 4840 container_manager_linux.go:303] "Creating device plugin manager"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.926249 4840 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.926289 4840 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.926482 4840 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.926576 4840 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.931187 4840 kubelet.go:418] "Attempting to sync node with API server"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.931214 4840 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.931242 4840 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.931259 4840 kubelet.go:324] "Adding apiserver pod source"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.931275 4840 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.935196 4840 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.936134 4840 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.937452 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 29 12:04:28 crc kubenswrapper[4840]: E0129 12:04:28.937536 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.937511 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 29 12:04:28 crc kubenswrapper[4840]: E0129 12:04:28.937603 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.938378 4840 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939716 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939735 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939742 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939749 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939761 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939768 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939775 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939787 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939794 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939801 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939811 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.939818 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.940682 4840 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.941063 4840 server.go:1280] "Started kubelet"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.941264 4840 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.942395 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.942513 4840 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.942852 4840 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:04:28 crc systemd[1]: Started Kubernetes Kubelet.
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.943582 4840 server.go:460] "Adding debug handlers to kubelet server"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.943639 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.943669 4840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.943703 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 18:50:06.349215503 +0000 UTC
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.943784 4840 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.943796 4840 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.943837 4840 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 29 12:04:28 crc kubenswrapper[4840]: W0129 12:04:28.944226 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 29 12:04:28 crc kubenswrapper[4840]: E0129 12:04:28.943875 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 12:04:28 crc kubenswrapper[4840]: E0129 12:04:28.944290 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:04:28 crc kubenswrapper[4840]: E0129 12:04:28.944644 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.952890 4840 factory.go:55] Registering systemd factory
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.952940 4840 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.953970 4840 factory.go:153] Registering CRI-O factory
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954007 4840 factory.go:221] Registration of the crio container factory successfully
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954098 4840 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954126 4840 factory.go:103] Registering Raw factory
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954211 4840 manager.go:1196] Started watching for new ooms in manager
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954648 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954699 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954721 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954735 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954751 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954763 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954776 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954793 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954809 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954826 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954840 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954856 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954869 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954891 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954904 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954918 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.954936 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955066 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" 
seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955090 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955147 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955161 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955207 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955242 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955276 4840 manager.go:319] Starting recovery of all containers Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.955277 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957396 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957438 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957456 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957468 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957478 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957490 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" 
volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957499 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957512 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957525 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957537 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957550 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957565 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" 
seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957579 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957592 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957605 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957618 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957628 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957637 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: 
I0129 12:04:28.957648 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957658 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957666 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957898 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957913 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957923 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957931 4840 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957945 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957971 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957981 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.957996 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958006 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958017 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958026 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958038 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958064 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958073 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958082 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958093 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958105 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958117 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958127 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958136 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958145 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958154 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" 
volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958164 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958174 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958185 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958194 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958209 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958219 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958231 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958242 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958253 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958265 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958280 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958291 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" 
seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958302 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958315 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958326 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958338 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958350 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958360 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 
12:04:28.958380 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958418 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958435 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958471 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958482 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958494 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958508 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958553 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958567 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958578 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958589 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958723 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958739 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958752 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958767 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958778 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958790 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958803 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958816 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958836 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958850 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958863 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958877 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958892 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958908 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958924 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958938 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958954 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958985 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.958999 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959011 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959024 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959035 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959046 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959056 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959066 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959081 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959091 4840 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959101 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959113 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959127 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959138 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959147 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959159 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959169 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959179 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959190 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959200 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959211 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959222 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959232 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959241 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959289 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959301 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959312 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959323 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" 
seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959335 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959346 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959357 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959367 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959381 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959396 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 29 12:04:28 crc 
kubenswrapper[4840]: I0129 12:04:28.959418 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959431 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959445 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: E0129 12:04:28.958137 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f321ab967f8c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:04:28.941031616 +0000 UTC m=+0.604011509,LastTimestamp:2026-01-29 12:04:28.941031616 +0000 UTC m=+0.604011509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959461 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959520 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959546 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959559 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959570 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959586 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959598 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.959610 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961458 4840 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961492 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961509 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961522 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961538 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961553 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961568 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961583 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961598 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961611 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961626 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961639 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961652 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961665 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961678 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961692 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961704 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" 
volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961716 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961779 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961795 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961807 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961819 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961833 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961848 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961863 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961878 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961895 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961909 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961923 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961973 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.961991 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962005 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962021 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962035 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962050 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" 
seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962067 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962082 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962099 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962112 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962128 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962143 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 
12:04:28.962158 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962175 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962188 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962205 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962222 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962236 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962250 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962264 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962277 4840 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962288 4840 reconstruct.go:97] "Volume reconstruction finished"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.962297 4840 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.990527 4840 manager.go:324] Recovery completed
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.997917 4840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:04:28 crc kubenswrapper[4840]: I0129 12:04:28.999860 4840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:28.999932 4840 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.000020 4840 kubelet.go:2335] "Starting kubelet main sync loop"
Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.000123 4840 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.000181 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: W0129 12:04:29.001197 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused
Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.001289 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.002107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.002141 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.002154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.004167 4840 cpu_manager.go:225] "Starting CPU manager" policy="none"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.004186 4840 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.004233 4840 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.033332 4840 policy_none.go:49] "None policy: Start"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.034853 4840 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.034900 4840 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.044437 4840 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.085816 4840 manager.go:334] "Starting Device Plugin manager"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.085865 4840 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.085880 4840 server.go:79] "Starting device plugin registration server"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.086688 4840 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.086706 4840 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.086914 4840 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.087061 4840 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.087079 4840 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.092599 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.100298 4840 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.100391 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.101519 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.101555 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.101565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.101670 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.101997 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.102060 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.103226 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.103256 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.103268 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.103448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.103469 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.103483 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.103605 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.104047 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.104102 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105132 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105316 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105481 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105523 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105830 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.105845 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106048 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106073 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106082 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106182 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106261 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106298 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106473 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106503 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.106984 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.107012 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.107022 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.107129 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.107149 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.107401 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.107426 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.107438 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.108993 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.109030 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.109043 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.145476 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.164251 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.164504 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.164623 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.164799 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.164928 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165092 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165188 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165281 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165389 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165514 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165611 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165707 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165809 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.165905 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.166029 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.187374 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.188611 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.188653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.188669 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.188700 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.189349 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268056 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268110 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268132 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268147 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268162 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268179 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268196 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268210 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268225 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268238 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268253 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268288 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268288 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268349 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268310 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268389 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268401 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268415 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268432 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268454 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268476 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268499 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268521 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268545 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268568 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268575 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268628 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268598 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268602 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.268575 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.390338 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.392509 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.392563 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.392580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.392641 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.393173 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.430568 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.443972 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.456362 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.478999 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: W0129 12:04:29.482401 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-44638669e11b5fcfe95f3fd8af332101032c620b9d08c23b78d47b24a67ff5b6 WatchSource:0}: Error finding container 44638669e11b5fcfe95f3fd8af332101032c620b9d08c23b78d47b24a67ff5b6: Status 404 returned error can't find the container with id 44638669e11b5fcfe95f3fd8af332101032c620b9d08c23b78d47b24a67ff5b6
Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.483534 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 29 12:04:29 crc kubenswrapper[4840]: W0129 12:04:29.485729 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-b3a98837cf50fd8134424ac7c6b942518921224eb1af11ddc0e958c372114e75 WatchSource:0}: Error finding container b3a98837cf50fd8134424ac7c6b942518921224eb1af11ddc0e958c372114e75: Status 404 returned error can't find the container with id b3a98837cf50fd8134424ac7c6b942518921224eb1af11ddc0e958c372114e75
Jan 29 12:04:29 crc kubenswrapper[4840]: W0129 12:04:29.495153 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-34512d12bca07679a99469b42cfc7c79aeaf1316425ceeaf864797dfa7385ef4 WatchSource:0}: Error finding container 34512d12bca07679a99469b42cfc7c79aeaf1316425ceeaf864797dfa7385ef4: Status 404 returned error can't find the container with id 34512d12bca07679a99469b42cfc7c79aeaf1316425ceeaf864797dfa7385ef4
Jan 29 12:04:29 crc kubenswrapper[4840]: W0129 12:04:29.495883 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d16d5d9b085c1edf8c3b86d76c6bea690801a53803ad80829c1cbbeb89d47a70 WatchSource:0}: Error finding container d16d5d9b085c1edf8c3b86d76c6bea690801a53803ad80829c1cbbeb89d47a70: Status 404 returned error can't find the container with id d16d5d9b085c1edf8c3b86d76c6bea690801a53803ad80829c1cbbeb89d47a70
Jan 29 12:04:29 crc kubenswrapper[4840]: W0129 12:04:29.505323 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-650444783e46181fe3560b49e4a328bda60cb1215d30b2e191e29d391b4f631b
WatchSource:0}: Error finding container 650444783e46181fe3560b49e4a328bda60cb1215d30b2e191e29d391b4f631b: Status 404 returned error can't find the container with id 650444783e46181fe3560b49e4a328bda60cb1215d30b2e191e29d391b4f631b Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.547428 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms" Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.793933 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.794814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.794840 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.794852 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.794873 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:04:29 crc kubenswrapper[4840]: E0129 12:04:29.795250 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 29 12:04:29 crc kubenswrapper[4840]: W0129 12:04:29.851257 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:29 crc 
kubenswrapper[4840]: E0129 12:04:29.851349 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.943721 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:29 crc kubenswrapper[4840]: I0129 12:04:29.943861 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 07:40:01.731066567 +0000 UTC Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.005548 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"b3a98837cf50fd8134424ac7c6b942518921224eb1af11ddc0e958c372114e75"} Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.006276 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"650444783e46181fe3560b49e4a328bda60cb1215d30b2e191e29d391b4f631b"} Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.006837 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d16d5d9b085c1edf8c3b86d76c6bea690801a53803ad80829c1cbbeb89d47a70"} Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.007385 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"34512d12bca07679a99469b42cfc7c79aeaf1316425ceeaf864797dfa7385ef4"} Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.008002 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"44638669e11b5fcfe95f3fd8af332101032c620b9d08c23b78d47b24a67ff5b6"} Jan 29 12:04:30 crc kubenswrapper[4840]: E0129 12:04:30.348139 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s" Jan 29 12:04:30 crc kubenswrapper[4840]: W0129 12:04:30.456510 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:30 crc kubenswrapper[4840]: E0129 12:04:30.456592 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:30 crc kubenswrapper[4840]: W0129 12:04:30.508346 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:30 crc kubenswrapper[4840]: E0129 
12:04:30.508439 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:30 crc kubenswrapper[4840]: W0129 12:04:30.535719 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:30 crc kubenswrapper[4840]: E0129 12:04:30.535804 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.595622 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.597031 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.597072 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.597086 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.597110 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:04:30 crc kubenswrapper[4840]: E0129 12:04:30.597565 4840 kubelet_node_status.go:99] "Unable to 
register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.943576 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.944591 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 10:44:16.885842581 +0000 UTC Jan 29 12:04:30 crc kubenswrapper[4840]: I0129 12:04:30.967828 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 12:04:30 crc kubenswrapper[4840]: E0129 12:04:30.969194 4840 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.012758 4840 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="977430973c128e7355bde6b153867319dc6f094cbd673e1450c15eaa926ff100" exitCode=0 Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.012839 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"977430973c128e7355bde6b153867319dc6f094cbd673e1450c15eaa926ff100"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.012912 4840 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.014045 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.014073 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.014082 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.015475 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92" exitCode=0 Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.015528 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.015614 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.016320 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.016339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.016348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.017547 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a" exitCode=0 Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.017598 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.017679 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.018415 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.018442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.018453 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020161 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020752 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020775 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020786 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020738 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020857 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020878 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020898 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.020922 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.022148 4840 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461" exitCode=0 Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.022183 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461"} Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.022225 4840 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.023145 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.023204 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.023227 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.023827 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.023846 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.023854 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:31 crc kubenswrapper[4840]: E0129 12:04:31.381415 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f321ab967f8c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:04:28.941031616 +0000 UTC m=+0.604011509,LastTimestamp:2026-01-29 12:04:28.941031616 +0000 UTC m=+0.604011509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 
29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.764132 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.943734 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:31 crc kubenswrapper[4840]: I0129 12:04:31.944797 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:10:28.701843051 +0000 UTC Jan 29 12:04:31 crc kubenswrapper[4840]: E0129 12:04:31.949491 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="3.2s" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.028846 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.028896 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.028906 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.028916 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.028924 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.028937 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.029824 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.029850 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.029860 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.031390 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.031423 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.031437 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.031407 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.032290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.032311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.032321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.033009 4840 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c" exitCode=0 Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.033053 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.033140 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.033648 4840 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.033666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.033673 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.036435 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.036710 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.036967 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"53fb6447c8d4f4ab1b5a56a3905468daf655cea27156aa8942c471c784baf825"} Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.037555 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.037572 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.037581 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.037989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.038001 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.038008 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.198201 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.199115 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.199147 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.199156 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.199177 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:04:32 crc kubenswrapper[4840]: E0129 12:04:32.199552 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.200:6443: connect: connection refused" node="crc" Jan 29 12:04:32 crc kubenswrapper[4840]: W0129 12:04:32.208247 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.200:6443: connect: connection refused Jan 29 12:04:32 crc kubenswrapper[4840]: E0129 12:04:32.208303 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.200:6443: connect: connection refused" logger="UnhandledError" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 
12:04:32.920893 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:04:32 crc kubenswrapper[4840]: I0129 12:04:32.945116 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 03:05:56.145778813 +0000 UTC Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.040820 4840 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d" exitCode=0 Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.040865 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d"} Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.040935 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.040957 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.040998 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.041013 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.041036 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.041132 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.041170 4840 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.041882 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.041918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.041930 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042420 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042519 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042564 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042587 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042606 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042640 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042530 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042671 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.042680 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.289606 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.927560 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:04:33 crc kubenswrapper[4840]: I0129 12:04:33.945495 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 11:32:34.403065442 +0000 UTC Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.013405 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.019793 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049084 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf"} Jan 29 12:04:34 crc 
kubenswrapper[4840]: I0129 12:04:34.049167 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049186 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2"} Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049224 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b"} Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049241 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc"} Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049255 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566"} Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049225 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049132 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.049363 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.050539 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:34 crc 
kubenswrapper[4840]: I0129 12:04:34.050571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.050582 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.050747 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.050807 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.050837 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.051015 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.051049 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.051063 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.236594 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.236776 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.237895 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.237928 4840 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.237938 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.714436 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:34 crc kubenswrapper[4840]: I0129 12:04:34.945739 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:30:44.216116857 +0000 UTC Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.020501 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.051879 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.051977 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.051990 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.052295 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.052968 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.052998 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.053007 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 
12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.053451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.053541 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.053666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.054159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.054239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.054254 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.399726 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.401257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.401294 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.401304 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.401325 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:04:35 crc kubenswrapper[4840]: I0129 12:04:35.945916 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:08:38.304411017 +0000 UTC Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.054065 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.055072 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.055119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.055130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.290268 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.290349 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.359378 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.359637 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.360915 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.360997 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.361047 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:36 crc kubenswrapper[4840]: I0129 12:04:36.946785 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 10:19:44.680217225 +0000 UTC Jan 29 12:04:37 crc kubenswrapper[4840]: I0129 12:04:37.468779 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:04:37 crc kubenswrapper[4840]: I0129 12:04:37.468963 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:37 crc kubenswrapper[4840]: I0129 12:04:37.469987 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:37 crc kubenswrapper[4840]: I0129 12:04:37.470014 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:37 crc kubenswrapper[4840]: I0129 12:04:37.470023 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:37 crc kubenswrapper[4840]: I0129 12:04:37.947661 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 17:32:16.441650987 +0000 UTC Jan 29 12:04:38 crc kubenswrapper[4840]: I0129 12:04:38.948704 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
deadline is 2025-11-28 02:25:53.328039992 +0000 UTC Jan 29 12:04:39 crc kubenswrapper[4840]: I0129 12:04:39.020513 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 29 12:04:39 crc kubenswrapper[4840]: I0129 12:04:39.020725 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:39 crc kubenswrapper[4840]: I0129 12:04:39.021867 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:39 crc kubenswrapper[4840]: I0129 12:04:39.021923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:39 crc kubenswrapper[4840]: I0129 12:04:39.021936 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:39 crc kubenswrapper[4840]: E0129 12:04:39.092878 4840 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 29 12:04:39 crc kubenswrapper[4840]: I0129 12:04:39.949514 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 04:08:39.464586004 +0000 UTC Jan 29 12:04:40 crc kubenswrapper[4840]: I0129 12:04:40.950602 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:08:43.247623347 +0000 UTC Jan 29 12:04:41 crc kubenswrapper[4840]: I0129 12:04:41.921547 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 12:04:41 crc kubenswrapper[4840]: I0129 
12:04:41.921636 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 12:04:41 crc kubenswrapper[4840]: I0129 12:04:41.951197 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 03:11:49.646906417 +0000 UTC Jan 29 12:04:42 crc kubenswrapper[4840]: W0129 12:04:42.479461 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 12:04:42 crc kubenswrapper[4840]: I0129 12:04:42.479561 4840 trace.go:236] Trace[1728704416]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 12:04:32.477) (total time: 10001ms): Jan 29 12:04:42 crc kubenswrapper[4840]: Trace[1728704416]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:04:42.479) Jan 29 12:04:42 crc kubenswrapper[4840]: Trace[1728704416]: [10.001605152s] [10.001605152s] END Jan 29 12:04:42 crc kubenswrapper[4840]: E0129 12:04:42.479587 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 12:04:42 crc kubenswrapper[4840]: W0129 12:04:42.503138 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 12:04:42 crc kubenswrapper[4840]: I0129 12:04:42.503238 4840 trace.go:236] Trace[2039272460]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 12:04:32.501) (total time: 10001ms): Jan 29 12:04:42 crc kubenswrapper[4840]: Trace[2039272460]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:04:42.503) Jan 29 12:04:42 crc kubenswrapper[4840]: Trace[2039272460]: [10.001528149s] [10.001528149s] END Jan 29 12:04:42 crc kubenswrapper[4840]: E0129 12:04:42.503261 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 12:04:42 crc kubenswrapper[4840]: W0129 12:04:42.851066 4840 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 29 12:04:42 crc kubenswrapper[4840]: I0129 12:04:42.851179 4840 trace.go:236] Trace[1771614390]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 12:04:32.849) (total time: 10001ms): Jan 29 12:04:42 crc kubenswrapper[4840]: Trace[1771614390]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (12:04:42.851) Jan 29 12:04:42 crc kubenswrapper[4840]: Trace[1771614390]: [10.001330964s] [10.001330964s] END Jan 29 12:04:42 crc kubenswrapper[4840]: E0129 
12:04:42.851204 4840 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 29 12:04:42 crc kubenswrapper[4840]: I0129 12:04:42.943399 4840 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 29 12:04:42 crc kubenswrapper[4840]: I0129 12:04:42.951624 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 16:11:16.304973532 +0000 UTC Jan 29 12:04:43 crc kubenswrapper[4840]: I0129 12:04:43.555782 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 12:04:43 crc kubenswrapper[4840]: I0129 12:04:43.555869 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 12:04:43 crc kubenswrapper[4840]: I0129 12:04:43.560275 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: 
User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 29 12:04:43 crc kubenswrapper[4840]: I0129 12:04:43.560326 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 29 12:04:43 crc kubenswrapper[4840]: I0129 12:04:43.934916 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]log ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]etcd ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/priority-and-fairness-filter ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-apiextensions-informers ok Jan 29 12:04:43 crc kubenswrapper[4840]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Jan 29 12:04:43 crc 
kubenswrapper[4840]: [+]poststarthook/crd-informer-synced ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-system-namespaces-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 29 12:04:43 crc kubenswrapper[4840]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 29 12:04:43 crc kubenswrapper[4840]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/bootstrap-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/start-kube-aggregator-informers ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/apiservice-registration-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/apiservice-discovery-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]autoregister-completion ok Jan 29 12:04:43 crc kubenswrapper[4840]: 
[+]poststarthook/apiservice-openapi-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 29 12:04:43 crc kubenswrapper[4840]: livez check failed Jan 29 12:04:43 crc kubenswrapper[4840]: I0129 12:04:43.935008 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:04:43 crc kubenswrapper[4840]: I0129 12:04:43.952408 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 08:27:45.818188811 +0000 UTC Jan 29 12:04:44 crc kubenswrapper[4840]: I0129 12:04:44.718830 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:44 crc kubenswrapper[4840]: I0129 12:04:44.719011 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:44 crc kubenswrapper[4840]: I0129 12:04:44.720566 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:44 crc kubenswrapper[4840]: I0129 12:04:44.720615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:44 crc kubenswrapper[4840]: I0129 12:04:44.720646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:44 crc kubenswrapper[4840]: I0129 12:04:44.953065 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 03:42:22.215582731 +0000 UTC Jan 29 12:04:45 crc kubenswrapper[4840]: I0129 12:04:45.953432 4840 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:57:56.280301713 +0000 UTC Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.151349 4840 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.290083 4840 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.290160 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.405170 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.421579 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.940305 4840 apiserver.go:52] "Watching apiserver" Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.946044 4840 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.946601 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-etcd/etcd-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.947336 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.947433 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.947566 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:46 crc kubenswrapper[4840]: E0129 12:04:46.947859 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:04:46 crc kubenswrapper[4840]: E0129 12:04:46.948008 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.948241 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.948249 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.948648 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:04:46 crc kubenswrapper[4840]: E0129 12:04:46.948762 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.950357 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.951859 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.951988 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.951871 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.951904 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.951910 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.951976 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.951993 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.952207 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.953692 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline
is 2026-01-16 05:56:30.814052342 +0000 UTC Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.962466 4840 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.978823 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:46 crc kubenswrapper[4840]: I0129 12:04:46.998741 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.017124 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.033222 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.044752 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.045377 4840 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.054890 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.065979 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.077159 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:47 crc kubenswrapper[4840]: E0129 12:04:47.090410 4840 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 29 12:04:47 crc kubenswrapper[4840]: I0129 12:04:47.953782 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:14:33.730535861 +0000 UTC Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.001097 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.001221 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.554612 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.564307 4840 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.565447 4840 trace.go:236] Trace[1363742567]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (29-Jan-2026 12:04:38.369) (total time: 10195ms):
Jan 29 12:04:48 crc kubenswrapper[4840]: Trace[1363742567]: ---"Objects listed" error: 10195ms (12:04:48.565)
Jan 29 12:04:48 crc kubenswrapper[4840]: Trace[1363742567]: [10.195697876s] [10.195697876s] END
Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.565469 4840 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.566303 4840 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.567067 4840 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.579333 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35486->192.168.126.11:17697: read: connection reset by peer" start-of-body=
Jan 29 12:04:48
crc kubenswrapper[4840]: I0129 12:04:48.579423 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:35486->192.168.126.11:17697: read: connection reset by peer" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668433 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668791 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668816 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668812 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668838 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668862 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668886 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668911 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668935 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.668977 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669009 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669040 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669063 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669086 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669106 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669131 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669126 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669165 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669179 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669186 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669238 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669268 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669291 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669314 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669335 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669338 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669359 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669381 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669403 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669428 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669448 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669471 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669496 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669519 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669541 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:04:48 crc 
kubenswrapper[4840]: I0129 12:04:48.669563 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669589 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669615 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669639 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669712 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669739 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" 
(UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669739 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669764 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669786 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669811 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669816 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670026 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.669837 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670061 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670086 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670113 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670104 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670141 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670166 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670192 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670201 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670217 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670211 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670249 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670277 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670298 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670313 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670346 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670370 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670395 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670415 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670418 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670464 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670494 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670520 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670547 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670571 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670595 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670618 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670647 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670670 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670694 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670795 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670820 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670842 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670865 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670887 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod 
\"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670960 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671004 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671029 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671075 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671096 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671118 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671143 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671167 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671189 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671211 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671234 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671257 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671312 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671341 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671365 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671390 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671413 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671439 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671463 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671485 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671508 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671551 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671576 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671599 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671620 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671641 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671668 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671691 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671716 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671740 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671768 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671793 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671819 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671844 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671869 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671893 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671917 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671960 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672007 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672031 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672054 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672076 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672097 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 
12:04:48.672120 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672142 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672166 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672187 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673584 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677381 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677620 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677699 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677773 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677874 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677983 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678077 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678161 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678238 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678313 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678440 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678519 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678598 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678669 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678741 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678808 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678878 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678971 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679045 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679145 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679225 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679336 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679410 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679491 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679570 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679655 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670464 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670473 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670540 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670612 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670623 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670774 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670804 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670815 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670904 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.670908 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671038 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671122 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671126 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671197 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671316 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671359 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671367 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671557 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671623 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671716 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671745 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671747 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671874 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.671888 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.672921 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673156 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673199 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673233 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673312 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673477 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673508 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673885 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673841 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.673960 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.674271 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.674315 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.674380 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677484 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677618 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.677667 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678081 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678332 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678395 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678276 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678544 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678691 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.678779 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679012 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679071 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679161 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679235 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679364 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679469 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679576 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679520 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679778 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679796 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.679800 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680081 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680367 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680416 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680430 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680498 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680524 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680615 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680287 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.680793 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.681012 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.681143 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:04:49.181076284 +0000 UTC m=+20.844056237 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688042 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688069 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688092 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688089 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688113 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688135 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688153 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688172 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688144 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688187 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.682267 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.682489 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.682726 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.682888 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683057 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683024 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683212 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683375 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683402 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683474 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683590 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683768 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.683795 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.684142 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.684164 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.685586 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.685658 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.685691 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.686039 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.686056 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.686258 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.686269 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.686602 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.686622 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.686917 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.687445 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688240 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688558 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688688 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.688738 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689029 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689175 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689182 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689234 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689475 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689627 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689658 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.689933 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.690227 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.690235 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.690437 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.690641 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.690905 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.691122 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.691173 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.691186 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.691498 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.694618 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.691706 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.691789 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.692150 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.682116 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.693719 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.693797 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.694217 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.693807 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.694621 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.694192 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696252 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696288 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696306 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696326 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696348 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696610 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696763 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696924 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696977 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.696996 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697209 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697228 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697255 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697274 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697293 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697312 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: 
\"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697330 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697348 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697365 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697388 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697407 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697426 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697442 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697459 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697475 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697495 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697512 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697546 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697563 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697582 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697601 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697617 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697633 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697650 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697666 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697682 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697697 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697714 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697729 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697745 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697762 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697786 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697801 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697816 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697889 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697906 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697980 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698001 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698019 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698036 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698056 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698075 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698093 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698108 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698141 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698162 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698184 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 
12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698199 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698216 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698231 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698299 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698310 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698321 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698330 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698341 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698353 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698365 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698374 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698385 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698394 4840 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698402 4840 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698411 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698420 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698429 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698439 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698448 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698456 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" 
DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698465 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698474 4840 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698482 4840 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698491 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698499 4840 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698510 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698519 4840 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc 
kubenswrapper[4840]: I0129 12:04:48.698529 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698538 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698547 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698556 4840 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698565 4840 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698573 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698582 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698591 4840 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698599 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698608 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698616 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698624 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698633 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698642 4840 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698650 4840 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698658 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698667 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698675 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698683 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698692 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698708 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698716 4840 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node 
\"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698725 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698733 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698741 4840 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698749 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698757 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698768 4840 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698777 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698786 4840 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698794 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698802 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698810 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698818 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698826 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698833 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698844 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath 
\"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698853 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698864 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698872 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698881 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698890 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698898 4840 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698906 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698914 4840 
reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698922 4840 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698930 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698939 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698963 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698972 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698984 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.698994 4840 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699003 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699012 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699025 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699033 4840 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699041 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699050 4840 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699058 4840 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath 
\"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699072 4840 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699081 4840 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699089 4840 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699097 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699106 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699114 4840 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699123 4840 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699132 4840 
reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699140 4840 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699148 4840 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699157 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699167 4840 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699175 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699185 4840 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699193 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699202 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699210 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699219 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699228 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699236 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699245 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699253 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699261 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699270 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699278 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699286 4840 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699295 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699303 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699338 4840 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc 
kubenswrapper[4840]: I0129 12:04:48.699349 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699358 4840 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699368 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699378 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699387 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699397 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699406 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699416 4840 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699425 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699434 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699444 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699455 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699466 4840 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699475 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699490 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699499 4840 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699510 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699519 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699529 4840 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699538 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699548 4840 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699558 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: 
\"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699568 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699576 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699585 4840 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699595 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699605 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699614 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699623 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699633 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699642 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699650 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699659 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.699668 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.697074 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.700050 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.700053 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.700068 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.700239 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.701177 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). 
InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.702184 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.702524 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.702877 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.702883 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.702933 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.702939 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.703068 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.703530 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.703722 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.703913 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.704346 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.704391 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.704393 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.704461 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:49.204440079 +0000 UTC m=+20.867419972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.704652 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.704798 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.705280 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.705630 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.705841 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.705880 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.709458 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.710247 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.712095 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.713034 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.713619 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.714597 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.714726 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.714670 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:49.214641738 +0000 UTC m=+20.877621631 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715033 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715190 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.713137 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715257 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715315 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715510 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715575 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715858 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.715890 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.716064 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.716664 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.716915 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.717026 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.717521 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.717867 4840 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.718734 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.718762 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.718781 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.718837 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:49.218821532 +0000 UTC m=+20.881801425 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.718758 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.719037 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.719297 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.719319 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.719331 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.719411 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: E0129 12:04:48.719438 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:49.219420648 +0000 UTC m=+20.882400541 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.719587 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.725251 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.728095 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.728923 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.735094 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.737283 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.738218 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.741705 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.745473 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.772813 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 29 12:04:48 crc kubenswrapper[4840]: W0129 12:04:48.785706 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-1c971ad30df69f7679cd4791a1b1a5655dbd4723da4f7b41c8a21b6669ab9707 WatchSource:0}: Error finding container 1c971ad30df69f7679cd4791a1b1a5655dbd4723da4f7b41c8a21b6669ab9707: Status 404 returned error can't find the container with id 1c971ad30df69f7679cd4791a1b1a5655dbd4723da4f7b41c8a21b6669ab9707 Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800462 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800594 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800671 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800689 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800701 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800737 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800746 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800760 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800769 4840 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") 
on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800778 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800787 4840 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800796 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800804 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800814 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800823 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800847 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc 
kubenswrapper[4840]: I0129 12:04:48.800857 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800871 4840 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800873 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800880 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800962 4840 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800981 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800998 4840 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath 
\"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801011 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801025 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801045 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801059 4840 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801073 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801086 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801128 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801142 4840 
reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801155 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801169 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801181 4840 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801195 4840 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801207 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801220 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801232 4840 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801245 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801257 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801269 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801283 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801295 4840 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801307 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801321 4840 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 
29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801333 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801345 4840 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801356 4840 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801370 4840 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801383 4840 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.801395 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.800683 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 
12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.828573 4840 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.932972 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.933504 4840 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.933614 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.937546 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.944632 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.945506 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.954142 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 21:15:52.461527112 +0000 UTC Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.963181 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.974271 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.984177 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 29 12:04:48 crc kubenswrapper[4840]: I0129 12:04:48.993040 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.001240 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.001246 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.001496 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.001647 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.004421 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.005575 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.006311 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.006990 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.007748 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" 
path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.011891 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.012826 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.013738 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.014898 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.015602 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.016819 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.021167 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.021984 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.023230 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.023839 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.025170 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.026741 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.027974 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.028361 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.029296 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.030409 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.030904 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.031545 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.032758 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.033567 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.034600 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.034685 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.035198 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.036202 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.036780 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.038238 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.039339 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.039916 4840 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" 
path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.040047 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.042044 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.042495 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.043433 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.044888 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.045494 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.046512 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.047147 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.048172 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.048650 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.049623 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.050217 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.051115 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.051552 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.052484 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.053069 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.053516 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"
/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036
cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117
b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\
\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.054271 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.054793 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.055869 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.057038 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.057790 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.058887 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.059434 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.064638 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.064882 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha
256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.075448 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: W0129 12:04:49.077505 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d1a1c505f09c8511d980a9b3089749065e690bb15a1a9cc9f2bb541efd5c84f2 WatchSource:0}: Error finding container d1a1c505f09c8511d980a9b3089749065e690bb15a1a9cc9f2bb541efd5c84f2: Status 404 returned error can't find the container with id d1a1c505f09c8511d980a9b3089749065e690bb15a1a9cc9f2bb541efd5c84f2 Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.079707 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.087504 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.101488 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.104060 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d1a1c505f09c8511d980a9b3089749065e690bb15a1a9cc9f2bb541efd5c84f2"} Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.106156 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f"} Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.106191 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297"} Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.106204 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1c971ad30df69f7679cd4791a1b1a5655dbd4723da4f7b41c8a21b6669ab9707"} Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.108789 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.114415 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.127156 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447" exitCode=255 Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.127204 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447"} Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.135855 4840 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.136155 4840 scope.go:117] "RemoveContainer" containerID="61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.137835 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.156852 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.197033 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.203901 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.204329 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:04:50.204309732 +0000 UTC m=+21.867289625 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.209196 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.226297 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.249583 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.276685 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.305249 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.305283 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.305303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.305325 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305436 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305449 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305459 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305507 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:50.305489817 +0000 UTC m=+21.968469700 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305710 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305744 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305757 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305768 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:49 crc 
kubenswrapper[4840]: E0129 12:04:49.305782 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305759 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:50.305749814 +0000 UTC m=+21.968729707 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305909 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:50.305885268 +0000 UTC m=+21.968865311 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:49 crc kubenswrapper[4840]: E0129 12:04:49.305926 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 12:04:50.305918779 +0000 UTC m=+21.968898882 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.321329 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resta
rtCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/l
og/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646
fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.333234 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": 
net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\
\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.345748 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 29 12:04:49 crc kubenswrapper[4840]: I0129 12:04:49.955679 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:38:41.740586042 +0000 UTC Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.001251 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.001385 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.130607 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0182ff59702dcb573a7df2cc56059e741568b8e34110d00d599bdd518460ef0c"} Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.132288 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.133799 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd"} Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.134114 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.134993 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324"} Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.154613 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.169111 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.182377 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.202014 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.214852 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.215259 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:04:52.21523926 +0000 UTC m=+23.878219153 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.219128 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.233352 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.251263 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.266489 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.281535 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.294244 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.316249 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.316302 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.316330 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.316352 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316479 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316499 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316494 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316589 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316625 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316512 4840 projected.go:194] Error preparing data for 
projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316639 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316477 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316640 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:52.316614401 +0000 UTC m=+23.979594294 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316759 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:52.316731684 +0000 UTC m=+23.979711567 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316778 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:52.316769285 +0000 UTC m=+23.979749388 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:50 crc kubenswrapper[4840]: E0129 12:04:50.316795 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:52.316785735 +0000 UTC m=+23.979765628 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.317667 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.363388 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.422278 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.452351 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 
12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.492300 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.534710 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.598094 4840 csr.go:261] certificate signing request csr-wj9hk is approved, waiting to be issued Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.689032 4840 csr.go:257] certificate signing request csr-wj9hk is issued Jan 29 12:04:50 crc kubenswrapper[4840]: I0129 12:04:50.956293 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 17:20:04.049726973 +0000 UTC Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.000813 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.000819 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.000934 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.001013 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.476167 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-s2v8d"] Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.476477 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.483203 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5d6b5"] Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.483467 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.491491 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.491845 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.492451 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.498387 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.498488 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.499282 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.506981 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.508163 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.535597 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bbaf604-6946-4bca-96af-be0e5fc92cf3-mcd-auth-proxy-config\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 
12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.535646 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa9f54e4-ebb4-467b-92c9-16410e19fbd1-hosts-file\") pod \"node-resolver-5d6b5\" (UID: \"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\") " pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.535686 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8bbaf604-6946-4bca-96af-be0e5fc92cf3-rootfs\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.535718 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bbaf604-6946-4bca-96af-be0e5fc92cf3-proxy-tls\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.535739 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8dpv\" (UniqueName: \"kubernetes.io/projected/8bbaf604-6946-4bca-96af-be0e5fc92cf3-kube-api-access-b8dpv\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.535759 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsvnr\" (UniqueName: \"kubernetes.io/projected/aa9f54e4-ebb4-467b-92c9-16410e19fbd1-kube-api-access-hsvnr\") pod \"node-resolver-5d6b5\" (UID: 
\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\") " pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.553983 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.568905 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.594960 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.616727 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.630664 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.636163 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bbaf604-6946-4bca-96af-be0e5fc92cf3-mcd-auth-proxy-config\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.636207 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa9f54e4-ebb4-467b-92c9-16410e19fbd1-hosts-file\") pod \"node-resolver-5d6b5\" (UID: \"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\") " pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.636269 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8bbaf604-6946-4bca-96af-be0e5fc92cf3-rootfs\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.636301 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bbaf604-6946-4bca-96af-be0e5fc92cf3-proxy-tls\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.636344 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8dpv\" (UniqueName: \"kubernetes.io/projected/8bbaf604-6946-4bca-96af-be0e5fc92cf3-kube-api-access-b8dpv\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.636366 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsvnr\" (UniqueName: \"kubernetes.io/projected/aa9f54e4-ebb4-467b-92c9-16410e19fbd1-kube-api-access-hsvnr\") pod \"node-resolver-5d6b5\" (UID: \"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\") " pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.637414 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8bbaf604-6946-4bca-96af-be0e5fc92cf3-mcd-auth-proxy-config\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.637474 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/aa9f54e4-ebb4-467b-92c9-16410e19fbd1-hosts-file\") pod \"node-resolver-5d6b5\" (UID: \"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\") " pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.637498 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/8bbaf604-6946-4bca-96af-be0e5fc92cf3-rootfs\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.640998 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8bbaf604-6946-4bca-96af-be0e5fc92cf3-proxy-tls\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.656887 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsvnr\" (UniqueName: \"kubernetes.io/projected/aa9f54e4-ebb4-467b-92c9-16410e19fbd1-kube-api-access-hsvnr\") pod \"node-resolver-5d6b5\" (UID: \"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\") " pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.656999 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8dpv\" (UniqueName: \"kubernetes.io/projected/8bbaf604-6946-4bca-96af-be0e5fc92cf3-kube-api-access-b8dpv\") pod \"machine-config-daemon-s2v8d\" (UID: \"8bbaf604-6946-4bca-96af-be0e5fc92cf3\") " pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.669586 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"ex
itCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static
-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.689520 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.690480 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-29 11:59:50 +0000 UTC, rotation deadline is 2026-12-20 04:04:59.336370887 +0000 UTC Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.690538 4840 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7792h0m7.645836263s for next certificate rotation Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.708782 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.723696 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.736172 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/ser
viceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.750682 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.766017 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.779170 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.786762 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.795469 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5d6b5" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.796418 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.809266 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa9f54e4_ebb4_467b_92c9_16410e19fbd1.slice/crio-76884904f029c94aa8406a850563f4d2dbaf42b98e6fbf74482a1f8531b7973f WatchSource:0}: Error finding container 76884904f029c94aa8406a850563f4d2dbaf42b98e6fbf74482a1f8531b7973f: Status 404 returned error can't find the container with id 76884904f029c94aa8406a850563f4d2dbaf42b98e6fbf74482a1f8531b7973f Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.810633 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.823577 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.845837 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.863680 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.875635 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.914698 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-vztt4"] Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.915364 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-2zc5r"] Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.915601 4840 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2zc5r" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.915905 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.918597 4840 reflector.go:561] object-"openshift-multus"/"default-cni-sysctl-allowlist": failed to list *v1.ConfigMap: configmaps "default-cni-sysctl-allowlist" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.918633 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"default-cni-sysctl-allowlist\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.918690 4840 reflector.go:561] object-"openshift-multus"/"cni-copy-resources": failed to list *v1.ConfigMap: configmaps "cni-copy-resources" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.918701 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"cni-copy-resources\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-copy-resources\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' 
and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.918774 4840 reflector.go:561] object-"openshift-multus"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.918787 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.918809 4840 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.918835 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.918873 4840 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in 
API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.918882 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.918920 4840 reflector.go:561] object-"openshift-multus"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.918930 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.919162 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vl4fj"] Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.919252 4840 reflector.go:561] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": failed to list *v1.Secret: secrets "multus-ancillary-tools-dockercfg-vnmsz" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-multus": no relationship found between 
node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.919274 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-vnmsz\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"multus-ancillary-tools-dockercfg-vnmsz\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.919987 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.926265 4840 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.926297 4840 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.926314 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc 
kubenswrapper[4840]: E0129 12:04:51.926328 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.926371 4840 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.926383 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.926416 4840 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.926451 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: 
configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.926464 4840 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.926504 4840 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.926508 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: W0129 12:04:51.926463 4840 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.926552 4840 
reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: E0129 12:04:51.926530 4840 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.935083 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.957209 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 18:15:55.451142992 +0000 UTC Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.970228 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:51 crc kubenswrapper[4840]: I0129 12:04:51.984755 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.000800 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.000977 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.003394 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039441 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4wbn\" (UniqueName: \"kubernetes.io/projected/b331ae03-7000-435b-8cb4-65da0c67d876-kube-api-access-n4wbn\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039490 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-node-log\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039515 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc 
kubenswrapper[4840]: I0129 12:04:52.039537 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-multus-certs\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039558 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-netd\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039579 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-cnibin\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039613 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-slash\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039633 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-system-cni-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc 
kubenswrapper[4840]: I0129 12:04:52.039654 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-etc-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039674 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-config\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039693 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-system-cni-dir\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039717 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039736 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-os-release\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " 
pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039754 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-cni-binary-copy\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039784 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-systemd-units\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039803 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-log-socket\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039823 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-ovn-kubernetes\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039842 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-env-overrides\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039860 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-os-release\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039880 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-cni-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039898 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-netns\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039916 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-hostroot\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039934 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-conf-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 
12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039979 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-var-lib-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.039999 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5zjd\" (UniqueName: \"kubernetes.io/projected/3a588a45-d664-486f-9135-b0184d00785a-kube-api-access-x5zjd\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040017 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-kubelet\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040035 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-etc-kubernetes\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040058 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-kubelet\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040073 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-bin\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040090 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-k8s-cni-cncf-io\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040107 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-daemon-config\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040136 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040159 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-socket-dir-parent\") 
pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040177 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-systemd\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040194 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a588a45-d664-486f-9135-b0184d00785a-cni-binary-copy\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040214 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040229 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-cnibin\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040246 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-cni-multus\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040262 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-ovn\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040280 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnntk\" (UniqueName: \"kubernetes.io/projected/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-kube-api-access-rnntk\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040308 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040327 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3a588a45-d664-486f-9135-b0184d00785a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040348 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-cni-bin\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.040368 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-netns\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.119257 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",
\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.135568 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 
12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141530 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-env-overrides\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141568 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-os-release\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141589 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-cni-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141606 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-netns\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141629 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-systemd-units\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141646 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-log-socket\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141678 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-ovn-kubernetes\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141699 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: 
\"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-hostroot\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141700 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-netns\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141720 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-conf-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141727 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-systemd-units\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141746 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-etc-kubernetes\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141795 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-var-lib-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141822 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-conf-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141800 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-etc-kubernetes\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141855 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-var-lib-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141827 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5zjd\" (UniqueName: \"kubernetes.io/projected/3a588a45-d664-486f-9135-b0184d00785a-kube-api-access-x5zjd\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141790 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-hostroot\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141917 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-kubelet\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141876 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-log-socket\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141935 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-os-release\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141962 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-kubelet\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.141958 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-ovn-kubernetes\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142003 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-kubelet\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142024 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-bin\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142045 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-bin\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142069 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-kubelet\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142068 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-k8s-cni-cncf-io\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142091 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-k8s-cni-cncf-io\") pod 
\"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142095 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-daemon-config\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142155 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142180 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-socket-dir-parent\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142177 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-cni-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142202 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vl4fj\" (UID: 
\"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142217 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-systemd\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142256 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a588a45-d664-486f-9135-b0184d00785a-cni-binary-copy\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142267 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-socket-dir-parent\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142280 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-cni-multus\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142270 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-systemd\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142308 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142314 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-cni-multus\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142330 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-cnibin\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142336 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142351 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-ovn\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142366 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-cnibin\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142373 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnntk\" (UniqueName: \"kubernetes.io/projected/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-kube-api-access-rnntk\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142395 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-ovn\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142397 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142428 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3a588a45-d664-486f-9135-b0184d00785a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142466 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" 
(UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-cni-bin\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142504 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-netns\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142542 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4wbn\" (UniqueName: \"kubernetes.io/projected/b331ae03-7000-435b-8cb4-65da0c67d876-kube-api-access-n4wbn\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142542 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-var-lib-cni-bin\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142564 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-node-log\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142590 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib\") pod 
\"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142630 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-multus-certs\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142627 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-node-log\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142650 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-netd\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142604 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-netns\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142670 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-cnibin\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " 
pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142679 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-host-run-multus-certs\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142701 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-netd\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142731 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-slash\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142711 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-slash\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142745 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-cnibin\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142771 
4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-system-cni-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142812 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-os-release\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142835 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-cni-binary-copy\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142856 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-etc-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142872 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-os-release\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142875 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-system-cni-dir\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142881 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-config\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142907 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-etc-openvswitch\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142926 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-system-cni-dir\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.142973 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.143001 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-system-cni-dir\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.143556 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3a588a45-d664-486f-9135-b0184d00785a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.144379 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5"} Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.144414 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b"} Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.144437 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"21f2e4c1217f49963de2312690f2880391cccff65de46116295e4b6df10a56c5"} Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.148495 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5d6b5" event={"ID":"aa9f54e4-ebb4-467b-92c9-16410e19fbd1","Type":"ContainerStarted","Data":"76884904f029c94aa8406a850563f4d2dbaf42b98e6fbf74482a1f8531b7973f"} Jan 29 12:04:52 crc 
kubenswrapper[4840]: I0129 12:04:52.155716 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.173607 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.186705 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready 
status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.201768 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.216097 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.233751 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.243315 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.243846 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:04:56.243475571 +0000 UTC m=+27.906455464 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.252763 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.267601 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.283312 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.301468 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.316833 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.341789 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.344746 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.344817 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.344851 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 
12:04:52.344899 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.344961 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.344963 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345011 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345018 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345044 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345068 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-29 12:04:56.345052736 +0000 UTC m=+28.008032629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345089 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:56.345080517 +0000 UTC m=+28.008060500 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345093 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345118 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:56.345100737 +0000 UTC m=+28.008080620 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345123 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345139 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:52 crc kubenswrapper[4840]: E0129 12:04:52.345181 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:56.34516862 +0000 UTC m=+28.008148673 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.368465 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b0933232
6bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.380348 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.398703 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.420121 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.436022 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.460837 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.475881 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.746241 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.821533 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 29 12:04:52 crc 
kubenswrapper[4840]: I0129 12:04:52.834827 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.844414 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3a588a45-d664-486f-9135-b0184d00785a-cni-binary-copy\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.844414 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-cni-binary-copy\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.958163 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 20:22:48.120318044 +0000 UTC Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.962582 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 12:04:52 crc kubenswrapper[4840]: I0129 12:04:52.972590 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-env-overrides\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.000555 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.000625 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.000695 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.000741 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.030268 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.033917 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-multus-daemon-config\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.047447 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.073250 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.083874 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3a588a45-d664-486f-9135-b0184d00785a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.089090 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.093645 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-config\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 
12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.142905 4840 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.142985 4840 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.143012 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert podName:b331ae03-7000-435b-8cb4-65da0c67d876 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:53.642994035 +0000 UTC m=+25.305973928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert") pod "ovnkube-node-vl4fj" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876") : failed to sync secret cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.143100 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib podName:b331ae03-7000-435b-8cb4-65da0c67d876 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:53.643072617 +0000 UTC m=+25.306052690 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib") pod "ovnkube-node-vl4fj" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.154910 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5d6b5" event={"ID":"aa9f54e4-ebb4-467b-92c9-16410e19fbd1","Type":"ContainerStarted","Data":"ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630"} Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.156499 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65"} Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.162380 4840 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.164580 4840 projected.go:288] Couldn't get configMap openshift-multus/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.167737 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.181305 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13
cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.191784 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.205155 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.220308 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.250694 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.263741 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.273509 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.276602 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.293800 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.295521 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.297621 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.303004 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.309867 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.312411 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.330829 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.348187 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.362141 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.376772 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.378190 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.390251 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4wbn\" (UniqueName: \"kubernetes.io/projected/b331ae03-7000-435b-8cb4-65da0c67d876-kube-api-access-n4wbn\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.394723 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.409002 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.423697 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.435517 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.453125 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.460849 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.464638 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.466650 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.468426 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.472719 4840 projected.go:194] Error preparing data for projected volume kube-api-access-rnntk for pod openshift-multus/multus-2zc5r: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.472797 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-kube-api-access-rnntk podName:d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969 nodeName:}" failed. No retries permitted until 2026-01-29 12:04:53.972778705 +0000 UTC m=+25.635758588 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-rnntk" (UniqueName: "kubernetes.io/projected/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-kube-api-access-rnntk") pod "multus-2zc5r" (UID: "d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.474906 4840 projected.go:194] Error preparing data for projected volume kube-api-access-x5zjd for pod openshift-multus/multus-additional-cni-plugins-vztt4: failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: E0129 12:04:53.475012 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a588a45-d664-486f-9135-b0184d00785a-kube-api-access-x5zjd podName:3a588a45-d664-486f-9135-b0184d00785a nodeName:}" failed. No retries permitted until 2026-01-29 12:04:53.974993075 +0000 UTC m=+25.637972968 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-x5zjd" (UniqueName: "kubernetes.io/projected/3a588a45-d664-486f-9135-b0184d00785a-kube-api-access-x5zjd") pod "multus-additional-cni-plugins-vztt4" (UID: "3a588a45-d664-486f-9135-b0184d00785a") : failed to sync configmap cache: timed out waiting for the condition Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.477437 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.492413 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.514534 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f77
21709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.527979 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 
12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.547181 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.566200 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.580019 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:53Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.657669 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.657757 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.658432 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.662006 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert\") pod \"ovnkube-node-vl4fj\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") " pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.867690 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:04:53 crc kubenswrapper[4840]: I0129 12:04:53.959414 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 01:13:02.779801634 +0000 UTC Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.000537 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:54 crc kubenswrapper[4840]: E0129 12:04:54.000659 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.061202 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5zjd\" (UniqueName: \"kubernetes.io/projected/3a588a45-d664-486f-9135-b0184d00785a-kube-api-access-x5zjd\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.061477 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnntk\" (UniqueName: \"kubernetes.io/projected/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-kube-api-access-rnntk\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.066092 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnntk\" (UniqueName: \"kubernetes.io/projected/d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969-kube-api-access-rnntk\") pod \"multus-2zc5r\" (UID: \"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\") " pod="openshift-multus/multus-2zc5r" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.066154 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5zjd\" (UniqueName: \"kubernetes.io/projected/3a588a45-d664-486f-9135-b0184d00785a-kube-api-access-x5zjd\") pod \"multus-additional-cni-plugins-vztt4\" (UID: \"3a588a45-d664-486f-9135-b0184d00785a\") " pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.154153 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-2zc5r" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.159875 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-vztt4" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.159885 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20" exitCode=0 Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.159927 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20"} Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.159981 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"a6aa2824145345222ab8c76d9dc24b521088c36c516af4c38111d2adc8bdd235"} Jan 29 12:04:54 crc kubenswrapper[4840]: W0129 12:04:54.172768 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a588a45_d664_486f_9135_b0184d00785a.slice/crio-0ff10e84ee20c8ee32b2803c23972492dbf68183d4886c7ae64968a593b7122d WatchSource:0}: Error finding container 0ff10e84ee20c8ee32b2803c23972492dbf68183d4886c7ae64968a593b7122d: Status 404 returned error can't find the container with id 0ff10e84ee20c8ee32b2803c23972492dbf68183d4886c7ae64968a593b7122d Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.179040 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.197176 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.212318 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.225981 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13
cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.241244 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.255208 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.267248 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.282538 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.300438 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.330395 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.359919 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.384931 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 
12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.403303 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.432056 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.959819 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 08:38:40.594411302 +0000 UTC Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.967305 4840 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.970146 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.970199 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.970209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 
12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.970310 4840 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.979700 4840 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.980087 4840 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.981296 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.981347 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.981360 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.981376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:54 crc kubenswrapper[4840]: I0129 12:04:54.981389 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:54Z","lastTransitionTime":"2026-01-29T12:04:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:54 crc kubenswrapper[4840]: E0129 12:04:54.996553 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:54Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.000310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.000348 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.000424 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:55 crc kubenswrapper[4840]: E0129 12:04:55.000504 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.000358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.000574 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.000590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.000604 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: E0129 12:04:55.000687 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:04:55 crc kubenswrapper[4840]: E0129 12:04:55.014028 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.017111 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.017148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.017158 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.017174 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.017184 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: E0129 12:04:55.031468 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.035436 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.035480 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.035496 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.035518 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.035531 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: E0129 12:04:55.047502 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.051426 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.051471 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.051499 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.051518 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.051529 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: E0129 12:04:55.065916 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: E0129 12:04:55.066066 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.068105 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.068142 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.068151 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.068166 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.068176 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.170686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.170720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.170729 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.170750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.170763 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.171751 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.171795 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.171804 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.171815 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.171825 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.171834 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef"} Jan 29 12:04:55 crc kubenswrapper[4840]: 
I0129 12:04:55.173328 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerStarted","Data":"60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.173368 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerStarted","Data":"4786a80d6010b3ff61de2d9283db24120d8770d8bab53022d65a835648573c47"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.175654 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a588a45-d664-486f-9135-b0184d00785a" containerID="605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732" exitCode=0 Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.175712 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerDied","Data":"605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.175741 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerStarted","Data":"0ff10e84ee20c8ee32b2803c23972492dbf68183d4886c7ae64968a593b7122d"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.196387 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.211535 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.237002 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.255913 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.272865 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.272892 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.272913 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.272926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.272935 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.281682 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.293887 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.310534 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.324400 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.349710 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.367339 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed
21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"start
edAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.379687 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc 
kubenswrapper[4840]: I0129 12:04:55.379719 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.379728 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.379740 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.379748 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.385557 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.398635 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.410917 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.425978 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.438971 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.452429 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\"
:\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.464407 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.480151 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.481461 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.481498 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.481506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.481521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.481530 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.494723 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.507578 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.519514 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.532991 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.547185 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.575300 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.583644 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.583683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.583693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.583707 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.583719 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.592416 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.612059 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.625020 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.638564 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:55Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.692058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.692108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.692121 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.692136 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.692145 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.794442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.794490 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.794501 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.794518 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.794529 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.896554 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.896619 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.896633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.896651 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.896662 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.960216 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-16 06:30:14.953062621 +0000 UTC Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.998893 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.998927 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.998935 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.998960 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:55 crc kubenswrapper[4840]: I0129 12:04:55.998969 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:55Z","lastTransitionTime":"2026-01-29T12:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.000209 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.000300 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.100506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.100554 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.100565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.100583 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.100594 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.184863 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerStarted","Data":"af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.198589 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.202275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.202303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.202312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.202326 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.202334 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.212460 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.224039 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13
cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.236205 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.248971 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.262211 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.274999 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.285773 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.286019 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:05:04.28592412 +0000 UTC m=+35.948904023 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.288643 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.301467 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.305179 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.305508 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.305529 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.305574 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.305593 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.333757 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f
9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.349421 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 
12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.368936 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.382983 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.387570 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.387616 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.387649 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.387671 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387717 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387779 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387792 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387803 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387869 4840 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387878 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387884 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387922 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387803 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:04.387781594 +0000 UTC m=+36.050761487 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387979 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:04.387966269 +0000 UTC m=+36.050946162 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.387992 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:04.387986869 +0000 UTC m=+36.050966972 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:04:56 crc kubenswrapper[4840]: E0129 12:04:56.388001 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:04.387997339 +0000 UTC m=+36.050977232 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.396534 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.408708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 
12:04:56.408742 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.408754 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.408769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.408780 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.451094 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-7smcd"] Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.451505 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.453394 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.453468 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.454707 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.455909 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.476507 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.511593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.511628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.511636 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.511650 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.511658 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.540634 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.589560 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j8np\" (UniqueName: \"kubernetes.io/projected/8000c1f2-217e-480a-8f12-6eec342bafb1-kube-api-access-7j8np\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.589593 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8000c1f2-217e-480a-8f12-6eec342bafb1-serviceca\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 
12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.589642 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000c1f2-217e-480a-8f12-6eec342bafb1-host\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.610892 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.614387 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.614419 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.614433 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.614449 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.614463 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.633876 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.647173 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.657877 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.671239 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.683023 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.690379 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000c1f2-217e-480a-8f12-6eec342bafb1-host\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.690425 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8000c1f2-217e-480a-8f12-6eec342bafb1-serviceca\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.690442 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j8np\" (UniqueName: \"kubernetes.io/projected/8000c1f2-217e-480a-8f12-6eec342bafb1-kube-api-access-7j8np\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.690529 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8000c1f2-217e-480a-8f12-6eec342bafb1-host\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.691315 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/8000c1f2-217e-480a-8f12-6eec342bafb1-serviceca\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.693667 4840 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.707197 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j8np\" (UniqueName: \"kubernetes.io/projected/8000c1f2-217e-480a-8f12-6eec342bafb1-kube-api-access-7j8np\") pod \"node-ca-7smcd\" (UID: \"8000c1f2-217e-480a-8f12-6eec342bafb1\") " pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc 
kubenswrapper[4840]: I0129 12:04:56.709645 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.716081 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.716110 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.716119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.716132 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.716141 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.721750 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.733222 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.746261 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.760186 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.766278 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-7smcd" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.786004 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-contr
oller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.818596 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.818649 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.818668 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.818693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.818710 4840 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.921276 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.921301 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.921309 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.921324 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.921334 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:56Z","lastTransitionTime":"2026-01-29T12:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:56 crc kubenswrapper[4840]: I0129 12:04:56.961161 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:56:06.124273216 +0000 UTC Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.000348 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:57 crc kubenswrapper[4840]: E0129 12:04:57.000479 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.000957 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:57 crc kubenswrapper[4840]: E0129 12:04:57.001013 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.023887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.023922 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.023931 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.023974 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.023993 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.126096 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.126126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.126136 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.126157 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.126169 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.191817 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.193460 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7smcd" event={"ID":"8000c1f2-217e-480a-8f12-6eec342bafb1","Type":"ContainerStarted","Data":"e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.193498 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-7smcd" event={"ID":"8000c1f2-217e-480a-8f12-6eec342bafb1","Type":"ContainerStarted","Data":"38ea96f2ab584b08324305dba15cd60d124f35645c4b45712588d5ad0b0698cf"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.195659 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a588a45-d664-486f-9135-b0184d00785a" containerID="af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93" exitCode=0 Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.195718 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerDied","Data":"af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.212415 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.228482 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.228519 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.228531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.228550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.228562 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.233304 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.246485 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.257828 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.276659 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.290312 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.308806 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.321279 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.336734 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.336770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.336781 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.336798 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.336810 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.345278 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.361703 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.377517 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.390019 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.402697 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.413850 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.430731 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.443671 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.443720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.443732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.443750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 
12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.443764 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.446460 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMo
unts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.466235 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.481089 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.494675 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.506044 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.523608 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.539213 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.545937 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.545982 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.545991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.546005 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.546015 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.552988 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b546
3825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.567152 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.581136 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.597153 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.611807 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.630459 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.648979 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.649011 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.649023 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.649039 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.649050 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.649151 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.670097 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:57Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.752618 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.752655 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.752666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.752685 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.752696 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.855563 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.855610 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.855620 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.855633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.855643 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.958330 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.958541 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.958569 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.958585 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.958594 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:57Z","lastTransitionTime":"2026-01-29T12:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:57 crc kubenswrapper[4840]: I0129 12:04:57.962199 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 12:48:24.505307624 +0000 UTC Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.001067 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:04:58 crc kubenswrapper[4840]: E0129 12:04:58.001188 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.063397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.063435 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.063443 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.063457 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.063465 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.165887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.165918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.165927 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.166607 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.166629 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.200047 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a588a45-d664-486f-9135-b0184d00785a" containerID="ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a" exitCode=0 Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.200097 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerDied","Data":"ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.215307 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.227663 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.243870 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.259256 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.269900 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 
12:04:58.269961 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.269973 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.269990 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.270000 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.271157 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.288126 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 
12:04:58.304782 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.319236 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.332317 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.344781 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.357538 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.372883 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.373041 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.373192 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.373354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.373583 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.373061 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.394791 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.409413 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.428617 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:58Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.476805 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.476845 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.476854 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.476868 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.476880 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.578920 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.578978 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.578989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.579022 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.579032 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.681106 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.681151 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.681160 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.681172 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.681181 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.784403 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.784443 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.784454 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.784469 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.784482 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.814105 4840 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.887528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.887568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.887579 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.887593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.887604 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.962746 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:25:35.567686841 +0000 UTC Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.990521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.990567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.990578 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.990595 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:58 crc kubenswrapper[4840]: I0129 12:04:58.990606 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:58Z","lastTransitionTime":"2026-01-29T12:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.000760 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.000808 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:04:59 crc kubenswrapper[4840]: E0129 12:04:59.000874 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:04:59 crc kubenswrapper[4840]: E0129 12:04:59.001018 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.015894 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.028735 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.040916 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.052045 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.064197 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.080047 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 
12:04:59.093424 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.093462 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.093470 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.093483 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.093456 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.093494 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.113908 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.125612 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.136548 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.148625 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.164268 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.185066 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.196576 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.196615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.196623 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.196641 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.196650 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.197415 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.209638 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a588a45-d664-486f-9135-b0184d00785a" containerID="6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5" exitCode=0 Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.209683 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerDied","Data":"6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.215600 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.229248 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.245770 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.259915 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.273764 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.293364 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.298571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc 
kubenswrapper[4840]: I0129 12:04:59.298621 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.298633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.298651 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.298664 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.305107 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.324310 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12
:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.338097 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 
12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.356753 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.370145 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.383018 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.395613 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.401206 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.401258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.401271 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.401288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.401302 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.407767 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.418858 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.434000 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:04:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.503769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.503813 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.503823 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.503840 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.503851 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.606250 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.606282 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.606291 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.606305 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.606316 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.708469 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.708507 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.708519 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.708534 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.708545 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.811818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.811914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.811928 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.812069 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.812088 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.914492 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.914526 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.914534 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.914571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.914584 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:04:59Z","lastTransitionTime":"2026-01-29T12:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:04:59 crc kubenswrapper[4840]: I0129 12:04:59.963209 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 19:15:41.792060029 +0000 UTC Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.000849 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:00 crc kubenswrapper[4840]: E0129 12:05:00.001031 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.016822 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.016850 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.016859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.016873 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.016883 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.119766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.119806 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.119817 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.119836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.119848 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.217334 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerStarted","Data":"4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.222794 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.222871 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.222883 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.222903 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.222932 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.224244 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.224622 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.235126 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.248482 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.261374 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.262505 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.294367 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.325540 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.325588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.325599 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 
12:05:00.325619 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.325631 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.327172 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.364659 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.393889 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.411197 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.428156 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.428186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.428196 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.428209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.428219 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.433041 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.447231 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.460018 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.479160 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.490574 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.500176 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.515470 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5z
jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.530125 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.530174 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.530187 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.530204 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.530216 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.541153 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.557881 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.581197 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.596640 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
9T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.611754 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.630731 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.635832 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.635881 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.635893 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.635909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.635920 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.646055 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.659716 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.676713 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5z
jd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.699310 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.714106 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.728596 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.739084 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.739123 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.739133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc 
kubenswrapper[4840]: I0129 12:05:00.739152 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.739163 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.745457 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 
12:05:00.761582 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 
12:05:00.776583 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:00Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.841124 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.841166 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.841175 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.841193 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.841211 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.943522 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.943567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.943575 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.943588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.943597 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:00Z","lastTransitionTime":"2026-01-29T12:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:00 crc kubenswrapper[4840]: I0129 12:05:00.963626 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 05:35:59.16144259 +0000 UTC Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.001169 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:01 crc kubenswrapper[4840]: E0129 12:05:01.001299 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.001552 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:01 crc kubenswrapper[4840]: E0129 12:05:01.001717 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.046233 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.046277 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.046290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.046312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.046326 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.154662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.154708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.154719 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.154734 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.154746 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.229424 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a588a45-d664-486f-9135-b0184d00785a" containerID="4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418" exitCode=0 Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.229511 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerDied","Data":"4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.229569 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.230094 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.255730 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.257215 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.257262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.257275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.257291 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.257302 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.257469 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.271467 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.294564 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.308782 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
9T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.326500 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.341037 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.353930 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.360476 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 
12:05:01.360522 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.360534 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.360553 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.360592 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.364585 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.383825 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.399702 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.414452 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.425846 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.438798 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.452617 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.462384 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.462434 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.462448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.462466 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.462476 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.464642 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.477505 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.494561 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.507305 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.520251 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.533552 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.547316 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.564748 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.564787 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.564795 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc 
kubenswrapper[4840]: I0129 12:05:01.564811 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.564821 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.568162 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true
,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/
\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.583121 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 
12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.602384 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.618193 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
9T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.632630 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.645503 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.658248 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.666492 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 
12:05:01.666523 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.666531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.666545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.666553 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.670242 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.684831 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:01Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.768536 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.768584 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.768600 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.768614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.768623 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.871424 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.871462 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.871475 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.871492 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.871502 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.964377 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:28:04.455090501 +0000 UTC
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.974058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.974101 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.974113 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.974129 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:01 crc kubenswrapper[4840]: I0129 12:05:01.974147 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:01Z","lastTransitionTime":"2026-01-29T12:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.000407 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:05:02 crc kubenswrapper[4840]: E0129 12:05:02.002040 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.077348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.077402 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.077415 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.077442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.077457 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.180128 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.180194 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.180209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.180228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.180241 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.240129 4840 generic.go:334] "Generic (PLEG): container finished" podID="3a588a45-d664-486f-9135-b0184d00785a" containerID="e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c" exitCode=0 Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.240202 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerDied","Data":"e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.240312 4840 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.263789 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.264755 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.278676 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.282371 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.282414 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.282427 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.282443 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.282454 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.292495 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.306091 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.318210 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.332847 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.354699 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.376003 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.384824 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.384868 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.384879 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.384895 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.384907 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.390930 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.408914 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.421504 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.432328 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.446644 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.457776 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.468114 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:02Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.487877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 
12:05:02.487915 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.487927 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.487957 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.487971 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.590189 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.590232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.590245 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.590260 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.590271 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.692393 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.692423 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.692431 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.692446 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.692455 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.794605 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.794634 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.794642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.794664 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.794674 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.897060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.897099 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.897108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.897127 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.897137 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.964571 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 02:16:44.078151532 +0000 UTC Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.999261 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.999306 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.999318 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.999335 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:02 crc kubenswrapper[4840]: I0129 12:05:02.999347 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:02Z","lastTransitionTime":"2026-01-29T12:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.000358 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.000370 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:03 crc kubenswrapper[4840]: E0129 12:05:03.000473 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:03 crc kubenswrapper[4840]: E0129 12:05:03.000542 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.101385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.101433 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.101442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.101459 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.101469 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.204228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.204292 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.204312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.204336 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.204354 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.247567 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" event={"ID":"3a588a45-d664-486f-9135-b0184d00785a","Type":"ContainerStarted","Data":"45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.270167 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:0
4:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\
\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-cop
y\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.285807 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.306239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.306289 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.306304 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.306353 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.306368 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.306644 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.320787 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
9T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.337499 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.348828 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.360530 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.372076 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.387055 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.398702 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.408591 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.408631 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.408643 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.408659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.408671 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.413078 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:
04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.424589 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.438680 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.452851 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.467671 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.511226 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.511262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.511270 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.511285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.511294 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.613613 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.613651 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.613661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.613678 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.613687 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.717147 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.717208 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.717219 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.717240 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.717252 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.820464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.820543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.820553 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.820566 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.820577 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.923225 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.923269 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.923279 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.923296 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.923306 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:03Z","lastTransitionTime":"2026-01-29T12:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.937004 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r"] Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.937521 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.939718 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.940053 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.954860 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66
438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.965093 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 23:59:51.328523764 +0000 UTC Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.969228 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\
":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.985739 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c
33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:03 crc kubenswrapper[4840]: I0129 12:05:03.998460 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:03Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.000664 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.000804 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.012602 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\
\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.025601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.025637 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.025646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.025659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.025668 4840 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.025750 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.037674 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.053643 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.065854 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fa77e68-c6e2-4fc7-bff9-8b350895e913-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.065904 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fa77e68-c6e2-4fc7-bff9-8b350895e913-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 
12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.065940 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft9tb\" (UniqueName: \"kubernetes.io/projected/8fa77e68-c6e2-4fc7-bff9-8b350895e913-kube-api-access-ft9tb\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.066004 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fa77e68-c6e2-4fc7-bff9-8b350895e913-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.067316 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.080168 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.092311 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.105869 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.124065 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.127626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.127674 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.127684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.127700 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.127714 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.150579 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.164029 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.166756 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft9tb\" (UniqueName: \"kubernetes.io/projected/8fa77e68-c6e2-4fc7-bff9-8b350895e913-kube-api-access-ft9tb\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.166800 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fa77e68-c6e2-4fc7-bff9-8b350895e913-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.166835 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8fa77e68-c6e2-4fc7-bff9-8b350895e913-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.166892 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fa77e68-c6e2-4fc7-bff9-8b350895e913-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.167669 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8fa77e68-c6e2-4fc7-bff9-8b350895e913-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.167688 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/8fa77e68-c6e2-4fc7-bff9-8b350895e913-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.171594 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8fa77e68-c6e2-4fc7-bff9-8b350895e913-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.180450 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{}
,\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.187247 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft9tb\" (UniqueName: \"kubernetes.io/projected/8fa77e68-c6e2-4fc7-bff9-8b350895e913-kube-api-access-ft9tb\") pod \"ovnkube-control-plane-749d76644c-lj65r\" (UID: \"8fa77e68-c6e2-4fc7-bff9-8b350895e913\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.231775 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.232065 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.232130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc 
kubenswrapper[4840]: I0129 12:05:04.232212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.232276 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.252512 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/0.log" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.254261 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.254560 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea" exitCode=1 Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.254595 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.255201 4840 scope.go:117] "RemoveContainer" containerID="002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.269090 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: W0129 12:05:04.271410 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8fa77e68_c6e2_4fc7_bff9_8b350895e913.slice/crio-2adecf554cfdc353e504b78dc704b2cd0dfad9a2ff40d7f90fec13aa1d47005c WatchSource:0}: Error finding container 2adecf554cfdc353e504b78dc704b2cd0dfad9a2ff40d7f90fec13aa1d47005c: Status 404 returned error can't find the container with id 2adecf554cfdc353e504b78dc704b2cd0dfad9a2ff40d7f90fec13aa1d47005c Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.283471 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.295454 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.313441 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.328507 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.334376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.334419 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.334433 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.334451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.334462 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.341770 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.352914 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.367835 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.368060 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:05:20.368036271 +0000 UTC m=+52.031016164 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.373997 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name
\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e7
2a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.390865 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.419256 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"71 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.724971 6071 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.723967 6071 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.725112 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 12:05:03.725247 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 12:05:03.725426 6071 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.725461 6071 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.726192 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:05:03.726228 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 12:05:03.726270 6071 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:05:03.726285 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:05:03.726379 6071 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.435389 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.437359 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.437394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.437403 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.437453 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.437465 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.450599 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.463023 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.468779 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.468811 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.468838 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.468859 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.468984 4840 secret.go:188] Couldn't get secret 
openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469009 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469027 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469039 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469078 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:20.469055793 +0000 UTC m=+52.132035676 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469102 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:20.469094414 +0000 UTC m=+52.132074307 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469106 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469119 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469126 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469157 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:20.469140715 +0000 UTC m=+52.132120608 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469196 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: E0129 12:05:04.469219 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:20.469211507 +0000 UTC m=+52.132191610 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.477889 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 
12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.491586 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.508623 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restar
tCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://
e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:04Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.539642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.539694 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.539708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.539729 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.539744 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.642375 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.642431 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.642445 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.642466 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.642481 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.745381 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.745417 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.745427 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.745442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.745452 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.847444 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.847473 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.847482 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.847497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.847510 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.950091 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.950126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.950136 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.950151 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.950163 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:04Z","lastTransitionTime":"2026-01-29T12:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:04 crc kubenswrapper[4840]: I0129 12:05:04.965403 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 18:21:19.036166749 +0000 UTC Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.000896 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.000973 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.001045 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.001188 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.052465 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.052501 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.052513 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.052529 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.052540 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.131600 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.131633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.131641 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.131659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.131670 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.158495 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.162148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.162176 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.162184 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.162196 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.162204 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.175121 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.178497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.178560 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.178571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.178586 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.178596 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.193885 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.197717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.197769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.197781 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.197796 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.198097 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.209778 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.213387 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.213413 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.213420 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.213433 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.213443 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.224978 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.225094 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.226749 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.226782 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.226792 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.226806 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.226817 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.259648 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" event={"ID":"8fa77e68-c6e2-4fc7-bff9-8b350895e913","Type":"ContainerStarted","Data":"dd386c75a500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.259697 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" event={"ID":"8fa77e68-c6e2-4fc7-bff9-8b350895e913","Type":"ContainerStarted","Data":"78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.259707 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" event={"ID":"8fa77e68-c6e2-4fc7-bff9-8b350895e913","Type":"ContainerStarted","Data":"2adecf554cfdc353e504b78dc704b2cd0dfad9a2ff40d7f90fec13aa1d47005c"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.261825 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/0.log" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.264213 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.265257 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.278693 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.293300 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.315338 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.332666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.332708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.332717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.332731 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.332740 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.333699 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.351921 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:
49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.365606 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"moun
tPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.380214 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.385108 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-mnzvc"] Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.385600 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.385677 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.394983 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\
"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.409788 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.426835 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.435579 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.435615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.435623 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.435640 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.435651 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.445009 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.471075 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f
0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.480106 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.480178 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4mp9\" (UniqueName: \"kubernetes.io/projected/c69a828d-5ed4-45ac-95a4-f0cc698d6992-kube-api-access-c4mp9\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.493030 4840 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mo
untPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f89
45c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.513252 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"71 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.724971 6071 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.723967 6071 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.725112 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 12:05:03.725247 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 12:05:03.725426 6071 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.725461 6071 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.726192 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:05:03.726228 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 12:05:03.726270 6071 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:05:03.726285 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:05:03.726379 6071 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.534006 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.538633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.538672 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.538682 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.538697 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.538707 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.551143 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.565636 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.580867 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.581220 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4mp9\" (UniqueName: \"kubernetes.io/projected/c69a828d-5ed4-45ac-95a4-f0cc698d6992-kube-api-access-c4mp9\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.581017 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:05 crc kubenswrapper[4840]: E0129 12:05:05.581457 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. 
No retries permitted until 2026-01-29 12:05:06.081442643 +0000 UTC m=+37.744422536 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.584722 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"71 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.724971 6071 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.723967 6071 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.725112 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 12:05:03.725247 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 12:05:03.725426 6071 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.725461 6071 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.726192 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:05:03.726228 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 12:05:03.726270 6071 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:05:03.726285 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:05:03.726379 6071 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.605462 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4mp9\" (UniqueName: \"kubernetes.io/projected/c69a828d-5ed4-45ac-95a4-f0cc698d6992-kube-api-access-c4mp9\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.626979 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.640935 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.640983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.640991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.641106 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.641167 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.654086 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.674464 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.686188 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.697403 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.713659 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.727553 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.738671 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.743403 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.743441 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.743450 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.743465 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.743476 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.753584 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b546
3825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.766861 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.779926 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.793386 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.807026 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.818353 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.832729 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:05Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.845926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.845985 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.845996 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.846012 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.846022 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.948210 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.948257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.948268 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.948284 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.948293 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:05Z","lastTransitionTime":"2026-01-29T12:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:05 crc kubenswrapper[4840]: I0129 12:05:05.966512 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 17:33:25.087312743 +0000 UTC Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.001277 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:06 crc kubenswrapper[4840]: E0129 12:05:06.001406 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.051246 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.051284 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.051295 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.051310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.051319 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.086549 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:06 crc kubenswrapper[4840]: E0129 12:05:06.086765 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:06 crc kubenswrapper[4840]: E0129 12:05:06.086999 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:07.086979199 +0000 UTC m=+38.749959092 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.153469 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.153545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.153575 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.153593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.153604 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.256059 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.256103 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.256114 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.256127 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.256138 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.269266 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/1.log" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.270131 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/0.log" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.272570 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18" exitCode=1 Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.272628 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.272701 4840 scope.go:117] "RemoveContainer" containerID="002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.273417 4840 scope.go:117] "RemoveContainer" containerID="3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18" Jan 29 12:05:06 crc kubenswrapper[4840]: E0129 12:05:06.273626 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.293117 4840 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e3
76b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5
b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.310201 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.330878 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"71 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.724971 6071 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.723967 6071 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.725112 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 12:05:03.725247 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 12:05:03.725426 6071 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.725461 6071 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.726192 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:05:03.726228 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 12:05:03.726270 6071 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:05:03.726285 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:05:03.726379 6071 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.346543 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.358411 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.358448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.358459 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.358473 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.358482 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.359560 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.372486 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c
33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.383436 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc 
kubenswrapper[4840]: I0129 12:05:06.395102 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.407454 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 
12:05:06.418555 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.432710 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.444601 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.455931 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.460492 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.460528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.460538 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.460551 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.460560 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.468451 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:
04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.480281 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.492828 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.503274 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.513443 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.526066 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.539360 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.552507 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.563058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.563096 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.563105 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc 
kubenswrapper[4840]: I0129 12:05:06.563122 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.563141 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.565056 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 
12:05:06.577858 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 
12:05:06.590583 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"n
ame\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.610241 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.622963 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.643437 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://002db0321c3d2182288f1c4c7edbaf54e696da6ece246c6830b4cbd33a4c4aea\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"message\\\":\\\"71 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.724971 6071 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.723967 6071 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.725112 6071 handler.go:208] Removed *v1.Node event handler 2\\\\nI0129 12:05:03.725247 6071 handler.go:208] Removed *v1.Node event handler 7\\\\nI0129 12:05:03.725426 6071 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:03.725461 6071 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:03.726192 6071 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0129 12:05:03.726228 6071 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0129 12:05:03.726270 6071 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0129 12:05:03.726285 6071 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0129 12:05:03.726379 6071 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, 
AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\
\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\
":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: 
I0129 12:05:06.657523 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.665231 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.665258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.665266 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.665279 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.665289 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.672212 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.684140 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.696024 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.705279 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.750006 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.767154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.767231 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.767247 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.767266 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.767279 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.785662 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:06 crc 
kubenswrapper[4840]: I0129 12:05:06.869693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.869732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.869743 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.869762 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.869774 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.967644 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:51:03.355118482 +0000 UTC Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.972532 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.972587 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.972597 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.972612 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:06 crc kubenswrapper[4840]: I0129 12:05:06.972622 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:06Z","lastTransitionTime":"2026-01-29T12:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.000540 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.000593 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:07 crc kubenswrapper[4840]: E0129 12:05:07.000689 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.000549 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:07 crc kubenswrapper[4840]: E0129 12:05:07.000776 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:07 crc kubenswrapper[4840]: E0129 12:05:07.000839 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.074325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.074374 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.074385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.074398 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.074407 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.098723 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:07 crc kubenswrapper[4840]: E0129 12:05:07.098860 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:07 crc kubenswrapper[4840]: E0129 12:05:07.098911 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:09.098898025 +0000 UTC m=+40.761877918 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.176740 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.176806 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.176822 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.176841 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.176853 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.279699 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.279737 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.279745 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.279758 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.279767 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.280218 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/1.log" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.284005 4840 scope.go:117] "RemoveContainer" containerID="3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18" Jan 29 12:05:07 crc kubenswrapper[4840]: E0129 12:05:07.284336 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.299447 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.313591 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.324570 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc 
kubenswrapper[4840]: I0129 12:05:07.337674 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.349781 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 
12:05:07.361118 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.375541 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9
fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.381816 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.381846 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.381856 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.381873 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.381884 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.389262 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.401384 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.413463 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.423230 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.435429 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.451496 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.464147 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.472426 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.483993 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.484028 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.484039 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.484054 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.484065 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.484885 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.499411 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.518508 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.535666 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.550079 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.587058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.587108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.587122 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.587139 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.587152 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.589389 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.630678 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.666972 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.689681 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.689726 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.689736 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.689753 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.689766 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.709074 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b546
3825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.747034 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.793060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.793133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.793177 4840 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.793205 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.793216 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.796109 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a1
22dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.833107 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.873781 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service 
openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.895186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.895242 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.895252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.895268 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.895278 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.907993 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.952983 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.968663 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:17:36.082754037 +0000 UTC Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.994129 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117
a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"20
26-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:0
0Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:07Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.997706 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.997755 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.997771 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.997789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:07 crc kubenswrapper[4840]: I0129 12:05:07.997802 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:07Z","lastTransitionTime":"2026-01-29T12:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.000764 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:08 crc kubenswrapper[4840]: E0129 12:05:08.000860 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.026924 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:08Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:08 crc 
kubenswrapper[4840]: I0129 12:05:08.066449 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:08Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.099733 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.099766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.099773 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.099786 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.099795 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.108638 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:08Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.148242 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:08Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.202451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.202488 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.202497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.202511 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.202520 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.305295 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.305337 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.305348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.305366 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.305378 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.407725 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.408506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.408521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.408537 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.408552 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.511153 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.511237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.511251 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.511267 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.511276 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.613131 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.613177 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.613188 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.613204 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.613216 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.715035 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.715066 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.715075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.715089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.715097 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.817398 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.817449 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.817464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.817486 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.817502 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.919870 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.919910 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.919920 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.919935 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.919965 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:08Z","lastTransitionTime":"2026-01-29T12:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:08 crc kubenswrapper[4840]: I0129 12:05:08.969540 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 23:05:28.944567793 +0000 UTC Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.001131 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:09 crc kubenswrapper[4840]: E0129 12:05:09.001269 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.001390 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:09 crc kubenswrapper[4840]: E0129 12:05:09.001484 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.001523 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:09 crc kubenswrapper[4840]: E0129 12:05:09.001589 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.016319 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.021615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.021650 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.021658 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.021670 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.021679 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.029376 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.042311 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.054609 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.064264 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.079977 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.092673 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.110252 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.116082 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:09 crc kubenswrapper[4840]: E0129 
12:05:09.116212 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:09 crc kubenswrapper[4840]: E0129 12:05:09.116286 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:13.116251898 +0000 UTC m=+44.779231791 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.123134 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.123165 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.123180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.123193 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.123201 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.124295 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z 
is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.136127 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.149390 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\
\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.161544 4840 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.174841 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.185793 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.207320 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.223723 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.225385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.225420 4840 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.225432 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.225448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.225462 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.246839 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service 
openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.327362 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.327401 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.327410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.327425 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.327434 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.429663 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.429701 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.429718 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.429735 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.429745 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.532474 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.532523 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.532532 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.532546 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.532555 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.634400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.634458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.634476 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.634517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.634531 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.736515 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.736558 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.736571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.736588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.736599 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.838910 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.838962 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.838973 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.838988 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.838997 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.941630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.941667 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.941675 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.941691 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.941699 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:09Z","lastTransitionTime":"2026-01-29T12:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:09 crc kubenswrapper[4840]: I0129 12:05:09.970009 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 01:52:41.3349139 +0000 UTC Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.000537 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:10 crc kubenswrapper[4840]: E0129 12:05:10.000677 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.043969 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.044001 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.044011 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.044027 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.044036 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.146402 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.146452 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.146460 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.146475 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.146483 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.248548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.248596 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.248609 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.248629 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.248642 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.350884 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.350934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.350961 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.350981 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.350992 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.453285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.453397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.453413 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.453482 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.453505 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.555832 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.555902 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.555916 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.555932 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.555965 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.659091 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.659140 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.659151 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.659168 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.659180 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.761925 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.761995 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.762009 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.762025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.762037 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.863875 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.863907 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.863915 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.863926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.863935 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.966110 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.966177 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.966194 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.966216 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.966236 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:10Z","lastTransitionTime":"2026-01-29T12:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:10 crc kubenswrapper[4840]: I0129 12:05:10.970511 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:33:30.909850492 +0000 UTC Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.001201 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.001263 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.001201 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:11 crc kubenswrapper[4840]: E0129 12:05:11.001326 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:11 crc kubenswrapper[4840]: E0129 12:05:11.001379 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:11 crc kubenswrapper[4840]: E0129 12:05:11.001454 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.069013 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.069060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.069070 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.069089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.069100 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.171212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.171253 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.171264 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.171283 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.171297 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.272798 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.272842 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.272851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.272865 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.272874 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.375013 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.375070 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.375082 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.375098 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.375109 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.478191 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.478229 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.478239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.478252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.478260 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.581237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.581300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.581311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.581341 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.581351 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.683568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.683623 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.683633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.683647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.683658 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.787199 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.787319 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.787347 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.787379 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.787405 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.889992 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.890038 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.890051 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.890069 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.890082 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.971632 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 01:36:48.457717076 +0000 UTC Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.992969 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.993019 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.993034 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.993053 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:11 crc kubenswrapper[4840]: I0129 12:05:11.993084 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:11Z","lastTransitionTime":"2026-01-29T12:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.001131 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:12 crc kubenswrapper[4840]: E0129 12:05:12.001510 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.096360 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.096444 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.096463 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.096494 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.096514 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.199761 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.199821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.199835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.199856 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.199871 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.302150 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.302200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.302211 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.302234 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.302248 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.405548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.405643 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.405675 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.405709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.405735 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.508923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.509011 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.509028 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.509052 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.509067 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.612154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.612197 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.612207 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.612223 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.612233 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.714772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.714808 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.714819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.714834 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.714843 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.817488 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.817840 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.818155 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.818346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.818478 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.921609 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.921647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.921655 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.921669 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.921679 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:12Z","lastTransitionTime":"2026-01-29T12:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:12 crc kubenswrapper[4840]: I0129 12:05:12.971970 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 05:52:22.68523707 +0000 UTC Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.000388 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.000505 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.000529 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:13 crc kubenswrapper[4840]: E0129 12:05:13.000632 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:13 crc kubenswrapper[4840]: E0129 12:05:13.000725 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:13 crc kubenswrapper[4840]: E0129 12:05:13.000792 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.023477 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.023511 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.023524 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.023543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.023555 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.125194 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.125232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.125242 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.125257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.125267 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.158653 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:13 crc kubenswrapper[4840]: E0129 12:05:13.158820 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:13 crc kubenswrapper[4840]: E0129 12:05:13.158903 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:21.158882492 +0000 UTC m=+52.821862385 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.226961 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.227003 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.227013 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.227028 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.227039 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.329118 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.329159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.329171 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.329185 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.329195 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.431932 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.431992 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.432002 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.432017 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.432027 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.534085 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.534135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.534148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.534178 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.534187 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.635820 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.635869 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.635885 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.635903 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.635916 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.738254 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.738306 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.738315 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.738329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.738339 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.840464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.840509 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.840551 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.840568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.840579 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.943297 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.943358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.943376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.943401 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.943420 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:13Z","lastTransitionTime":"2026-01-29T12:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:13 crc kubenswrapper[4840]: I0129 12:05:13.972908 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 14:44:16.928246061 +0000 UTC Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.001246 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:14 crc kubenswrapper[4840]: E0129 12:05:14.001396 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.045234 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.045266 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.045276 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.045288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.045297 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.147856 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.147899 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.147911 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.147928 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.147956 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.251174 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.251228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.251256 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.251277 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.251289 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.354855 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.354909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.354918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.354934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.354962 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.457928 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.458024 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.458037 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.458058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.458071 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.560685 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.560772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.560796 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.560833 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.560861 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.664273 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.664329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.664340 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.664357 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.664371 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.767816 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.767894 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.767914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.767989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.768013 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.871572 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.871619 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.871632 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.871653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.871665 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.973286 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 18:24:35.653333036 +0000 UTC Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.974773 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.974819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.974835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.974855 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:14 crc kubenswrapper[4840]: I0129 12:05:14.974868 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:14Z","lastTransitionTime":"2026-01-29T12:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.001083 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.001117 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.001230 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.001258 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.001344 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.001468 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.078213 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.078279 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.078294 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.078316 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.078331 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.181891 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.181963 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.181981 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.182008 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.182023 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.285543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.285612 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.285622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.285642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.285654 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.389241 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.389367 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.389389 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.389418 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.389436 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.401439 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.401516 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.401541 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.401575 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.401601 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.421315 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:15Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.427747 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.427799 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.427812 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.427834 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.427849 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.445211 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:15Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.450862 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.450909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.450922 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.450959 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.450971 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.468380 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:15Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.473523 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.473601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.473617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.473641 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.473680 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.492111 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:15Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.497609 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.497677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.497689 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.497708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.497742 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.513110 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:15Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:15 crc kubenswrapper[4840]: E0129 12:05:15.513351 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.515063 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.515117 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.515135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.515151 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.515161 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.617405 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.617448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.617461 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.617478 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.617493 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.720150 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.720191 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.720200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.720231 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.720241 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.823168 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.823246 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.823267 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.823300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.823319 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.927660 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.927749 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.927774 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.927811 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.927837 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:15Z","lastTransitionTime":"2026-01-29T12:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:15 crc kubenswrapper[4840]: I0129 12:05:15.973529 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 15:09:58.664586091 +0000 UTC
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.001268 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:05:16 crc kubenswrapper[4840]: E0129 12:05:16.001424 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.034065 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.034122 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.034136 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.034155 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.034167 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.136410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.136456 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.136469 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.136487 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.136498 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.239741 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.239788 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.239830 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.239844 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.239854 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.342900 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.342991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.343001 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.343017 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.343028 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.445255 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.445300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.445312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.445329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.445343 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.547792 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.547851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.547863 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.547881 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.547893 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.650130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.650204 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.650220 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.650259 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.650271 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.752568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.752611 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.752623 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.752637 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.752648 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.855006 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.855049 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.855059 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.855080 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.855091 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.958464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.958514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.958526 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.958544 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.958557 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:16Z","lastTransitionTime":"2026-01-29T12:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:16 crc kubenswrapper[4840]: I0129 12:05:16.973768 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 19:20:10.390666835 +0000 UTC
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.001296 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.001344 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:05:17 crc kubenswrapper[4840]: E0129 12:05:17.001434 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:05:17 crc kubenswrapper[4840]: E0129 12:05:17.001650 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.001771 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:05:17 crc kubenswrapper[4840]: E0129 12:05:17.001855 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.062411 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.062478 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.062492 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.062515 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.062532 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.166064 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.166114 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.166127 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.166150 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.166163 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.268880 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.268936 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.268981 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.269006 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.269021 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.370815 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.370873 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.370898 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.370921 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.370936 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.473223 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.473285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.473294 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.473309 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.473318 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.575636 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.575686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.575702 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.575726 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.575740 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.678046 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.678099 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.678114 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.678134 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.678147 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.780662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.780748 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.780765 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.780785 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.780799 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.883816 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.883890 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.883904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.883922 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.883935 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.973884 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 01:25:55.757678548 +0000 UTC
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.986611 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.986666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.986686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.986707 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:17 crc kubenswrapper[4840]: I0129 12:05:17.986722 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:17Z","lastTransitionTime":"2026-01-29T12:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.000252 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:05:18 crc kubenswrapper[4840]: E0129 12:05:18.000406 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.089698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.089737 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.089749 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.089766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.089778 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.191548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.191576 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.191735 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.191753 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.191765 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.294326 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.294362 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.294370 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.294383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.294393 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.396061 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.396101 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.396113 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.396128 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.396140 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.498846 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.498877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.498886 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.498899 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.498908 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.601227 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.601264 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.601276 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.601292 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.601303 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.703315 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.703355 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.703367 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.703383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.703392 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.805870 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.805914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.805927 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.805969 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.805983 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.908479 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.908518 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.908555 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.908572 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.908583 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:18Z","lastTransitionTime":"2026-01-29T12:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:18 crc kubenswrapper[4840]: I0129 12:05:18.974823 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:30:34.580874331 +0000 UTC Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.000389 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.000428 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.000394 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:19 crc kubenswrapper[4840]: E0129 12:05:19.000524 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:19 crc kubenswrapper[4840]: E0129 12:05:19.000596 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:19 crc kubenswrapper[4840]: E0129 12:05:19.000702 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.011476 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.011506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.011515 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.011529 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.011539 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.017120 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.029973 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.041812 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h
svnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.058766 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.071987 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc 
kubenswrapper[4840]: I0129 12:05:19.082651 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7
j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.095688 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.108682 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.112761 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.112790 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.112800 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.112813 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.112821 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.120913 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.133157 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.146259 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.158434 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a
500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.198214 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.214772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.214831 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.214846 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.214868 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.214880 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.224006 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.250005 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.263289 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.277890 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:19Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.317405 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.317430 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.317438 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.317451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.317462 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.420275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.420314 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.420322 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.420336 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.420345 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.523071 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.523114 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.523124 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.523140 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.523155 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.625295 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.625326 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.625335 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.625348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.625356 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.728089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.728332 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.728408 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.728479 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.728545 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.831069 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.831107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.831116 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.831130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.831139 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.933520 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.933778 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.933885 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.934053 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.934251 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:19Z","lastTransitionTime":"2026-01-29T12:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:19 crc kubenswrapper[4840]: I0129 12:05:19.975854 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 23:55:35.788674032 +0000 UTC Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.000288 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.000417 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.036967 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.037336 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.037716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.037916 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.038135 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.140998 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.141062 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.141075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.141095 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.141107 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.244418 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.244499 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.244516 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.244538 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.244552 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.346886 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.346936 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.346964 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.346983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.346997 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.449493 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.449539 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.449552 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.449569 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.449581 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.457215 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.457315 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:05:52.457293713 +0000 UTC m=+84.120273616 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.553562 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.553617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.553627 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.553647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.553660 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.558262 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.558341 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.558390 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558395 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.558423 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558489 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:52.558462438 +0000 UTC m=+84.221442501 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558514 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558521 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558542 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558557 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558558 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:52.55854578 +0000 UTC m=+84.221525833 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558604 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:52.558593121 +0000 UTC m=+84.221573194 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558626 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558654 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558669 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:20 crc kubenswrapper[4840]: E0129 12:05:20.558744 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:52.558719955 +0000 UTC m=+84.221700018 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.657568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.657634 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.657654 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.657686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.657706 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.761596 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.761675 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.761693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.761739 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.761756 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.863959 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.864002 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.864012 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.864046 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.864059 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.966872 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.966904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.966914 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.966928 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.966938 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:20Z","lastTransitionTime":"2026-01-29T12:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:20 crc kubenswrapper[4840]: I0129 12:05:20.976243 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 23:07:00.521547737 +0000 UTC Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.000841 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.000887 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.000915 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:21 crc kubenswrapper[4840]: E0129 12:05:21.001007 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:21 crc kubenswrapper[4840]: E0129 12:05:21.001068 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:21 crc kubenswrapper[4840]: E0129 12:05:21.001150 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.069667 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.069711 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.069722 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.069738 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.069749 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.163413 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:21 crc kubenswrapper[4840]: E0129 12:05:21.163575 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:21 crc kubenswrapper[4840]: E0129 12:05:21.163660 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. No retries permitted until 2026-01-29 12:05:37.163638877 +0000 UTC m=+68.826618960 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.172198 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.172379 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.172512 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.172593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.172671 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.275324 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.275550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.275611 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.275708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.275787 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.378500 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.378804 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.378893 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.379005 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.379108 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.481814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.482073 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.482159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.482230 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.482288 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.585326 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.585361 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.585370 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.585386 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.585396 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.687580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.687620 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.687629 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.687644 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.687653 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.790228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.790265 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.790275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.790291 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.790302 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.892905 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.892973 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.892983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.893003 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.893014 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.976314 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 13:22:43.246786508 +0000 UTC Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.995750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.995802 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.995814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.995836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:21 crc kubenswrapper[4840]: I0129 12:05:21.995854 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:21Z","lastTransitionTime":"2026-01-29T12:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.001024 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:22 crc kubenswrapper[4840]: E0129 12:05:22.001516 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.001658 4840 scope.go:117] "RemoveContainer" containerID="3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.098361 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.098405 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.098458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.098479 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.098964 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.201386 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.201427 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.201440 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.201458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.201470 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.303536 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.303578 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.303587 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.303601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.303609 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.333300 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/1.log" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.335661 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.336594 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.355624 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa388
11c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.369220 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.381627 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.392986 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.405784 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 
12:05:22.405817 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.405828 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.405843 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.405855 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.406497 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.426333 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.438540 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.450044 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04
:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.463437 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.476158 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.492723 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.508623 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.508653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.508662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.508677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.508687 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.510504 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.531142 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.544268 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a
500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.569273 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.585475 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.606741 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", 
ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:22Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.611095 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.611126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.611135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.611149 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.611158 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.713154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.713192 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.713202 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.713217 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.713226 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.815106 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.815139 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.815148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.815161 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.815170 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.917693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.917736 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.917746 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.917763 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.917774 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:22Z","lastTransitionTime":"2026-01-29T12:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:22 crc kubenswrapper[4840]: I0129 12:05:22.976976 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:12:17.863665642 +0000 UTC Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.000624 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.000687 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:23 crc kubenswrapper[4840]: E0129 12:05:23.000749 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:23 crc kubenswrapper[4840]: E0129 12:05:23.000809 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.000848 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:23 crc kubenswrapper[4840]: E0129 12:05:23.000927 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.020183 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.020224 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.020232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.020248 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.020257 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.121809 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.121859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.121875 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.121896 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.121914 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.224009 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.224038 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.224046 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.224059 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.224067 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.325786 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.325855 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.325870 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.325891 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.325906 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.340525 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/2.log" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.341206 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/1.log" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.343321 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2" exitCode=1 Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.343367 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.343405 4840 scope.go:117] "RemoveContainer" containerID="3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.344284 4840 scope.go:117] "RemoveContainer" containerID="4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2" Jan 29 12:05:23 crc kubenswrapper[4840]: E0129 12:05:23.344423 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.360657 4840 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1
e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.385130 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service 
openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 
obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058
cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.407328 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.420064 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.428009 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.428054 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.428070 4840 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.428088 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.428101 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.432605 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c
04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.443891 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13
cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.454104 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.471463 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.483640 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.495610 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.509662 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.521768 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.530835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.530883 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.530895 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.530912 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.530924 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.533977 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.548761 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.561118 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.571727 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.583125 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.633365 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.633422 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.633435 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.633448 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.633456 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.735683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.735725 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.735737 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.735753 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.735762 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.837590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.837631 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.837642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.837657 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.837666 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.939930 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.939979 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.939990 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.940046 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.940056 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:23Z","lastTransitionTime":"2026-01-29T12:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:23 crc kubenswrapper[4840]: I0129 12:05:23.977174 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 14:45:46.463465855 +0000 UTC Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.001088 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:24 crc kubenswrapper[4840]: E0129 12:05:24.001288 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.043303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.043349 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.043361 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.043379 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.043391 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.145873 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.145910 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.145921 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.145966 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.145982 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.241692 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.250214 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.250257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.250266 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.250280 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.250290 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.256434 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.266266 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.282978 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.298546 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.314156 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.331836 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.349354 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/2.log" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.351143 4840 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.352354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.352630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.352651 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.352671 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.352684 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.353539 4840 scope.go:117] "RemoveContainer" containerID="4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2" Jan 29 12:05:24 crc kubenswrapper[4840]: E0129 12:05:24.353679 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.368184 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622
e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podI
P\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.390062 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:3
3Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.407833 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.427631 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3b239c2dd4a973664fcae9e7af4c27ffa979d72258d8726dd4b740197e689c18\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"5.432458 6256 services_controller.go:451] Built service 
openshift-kube-apiserver-operator/metrics cluster-wide LB for network=default: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-kube-apiserver-operator/metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-kube-apiserver-operator/metrics\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.109\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0129 12:05:05.432480 6256 services_controller.go:452] Built service openshift-kube-apiserver-operator/metrics per-node LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432490 6256 services_controller.go:453] Built service openshift-kube-apiserver-operator/metrics template LB for network=default: []services.LB{}\\\\nI0129 12:05:05.432499 6256 services_controller.go:454] Service openshift-kube-apiserver-operator/metrics for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nI0129 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] Retry successful for *v1.Pod 
openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 
obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\
\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058
cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.442357 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.460071 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.460143 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.460160 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.460187 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.460219 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.461373 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.473176 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc 
kubenswrapper[4840]: I0129 12:05:24.484885 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.496908 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 
12:05:24.508277 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.522326 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9
fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.534193 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.545064 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.563896 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.564783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.564824 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.564836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 
12:05:24.564853 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.564863 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.577598 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.591049 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.603734 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.615763 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.626712 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.640574 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\
\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf5467
68c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.661709 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 
12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.667192 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.667257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.667270 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.667289 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.667302 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.685766 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.700555 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.715356 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.729004 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPa
th\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.742577 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.761527 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.770530 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.770909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.771252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.771479 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.771685 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.777417 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc 
kubenswrapper[4840]: I0129 12:05:24.796171 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:24Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.875420 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.875464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.875476 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.875495 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.875507 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.977809 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 10:14:42.70161619 +0000 UTC Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.978763 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.978801 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.978815 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.978835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:24 crc kubenswrapper[4840]: I0129 12:05:24.978845 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:24Z","lastTransitionTime":"2026-01-29T12:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.000429 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.000463 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.000485 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.001266 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.001422 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.001656 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.081584 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.081639 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.081653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.081674 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.081691 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.184528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.184589 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.184604 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.184631 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.184646 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.287311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.287376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.287388 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.287410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.287424 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.390773 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.390842 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.390860 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.390884 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.390902 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.493250 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.493289 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.493299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.493315 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.493327 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.597644 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.597706 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.597718 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.597736 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.597748 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.670615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.670702 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.670729 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.670756 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.670777 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.691860 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.697304 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.697346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.697358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.697376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.697389 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.723266 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.728050 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.728094 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.728108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.728128 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.728141 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.743525 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.748976 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.749043 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.749059 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.749080 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.749095 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.765978 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.770421 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.770634 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.770652 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.770683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.770701 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.784654 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:25Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:25 crc kubenswrapper[4840]: E0129 12:05:25.785024 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.786904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.786992 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.787007 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.787028 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.787042 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.890245 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.890292 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.890311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.890333 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.890351 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.978247 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 06:27:58.97247534 +0000 UTC Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.993822 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.993869 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.993936 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.994223 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:25 crc kubenswrapper[4840]: I0129 12:05:25.994238 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:25Z","lastTransitionTime":"2026-01-29T12:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.001129 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:26 crc kubenswrapper[4840]: E0129 12:05:26.001248 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.097392 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.097447 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.097459 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.097481 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.097497 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.201243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.201313 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.201331 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.201355 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.201376 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.304538 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.305008 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.305079 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.305159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.305243 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.407540 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.407584 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.407598 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.407617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.407629 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.509683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.509732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.509743 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.509761 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.509773 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.612253 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.612290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.612299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.612314 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.612323 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.714514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.714625 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.714645 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.714666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.714680 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.817075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.817138 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.817153 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.817172 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.817186 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.920820 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.920876 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.920893 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.920918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.920930 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:26Z","lastTransitionTime":"2026-01-29T12:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:26 crc kubenswrapper[4840]: I0129 12:05:26.979688 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 08:11:31.714929498 +0000 UTC Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.001244 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:27 crc kubenswrapper[4840]: E0129 12:05:27.001381 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.001588 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:27 crc kubenswrapper[4840]: E0129 12:05:27.001646 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.001891 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:27 crc kubenswrapper[4840]: E0129 12:05:27.002096 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.023330 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.023374 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.023382 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.023399 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.023409 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.125642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.125677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.125694 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.125713 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.125726 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.227739 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.227772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.227781 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.227796 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.227805 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.330211 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.330247 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.330257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.330275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.330294 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.432455 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.432503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.432514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.432537 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.432550 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.534742 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.534784 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.534793 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.534807 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.534818 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.636894 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.636923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.636957 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.636971 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.636979 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.739349 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.739396 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.739409 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.739427 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.739440 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.841915 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.841980 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.841993 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.842012 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.842023 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.943866 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.943974 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.943991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.944009 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.944019 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:27Z","lastTransitionTime":"2026-01-29T12:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:27 crc kubenswrapper[4840]: I0129 12:05:27.980219 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:15:35.876848332 +0000 UTC Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.000585 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:28 crc kubenswrapper[4840]: E0129 12:05:28.000739 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.046725 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.046767 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.046777 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.046792 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.046802 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.148793 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.148826 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.148835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.148849 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.148858 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.251506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.251553 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.251565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.251586 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.251599 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.354026 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.354083 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.354096 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.354117 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.354134 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.456391 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.456442 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.456451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.456466 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.456475 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.558742 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.558794 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.558814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.558833 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.558852 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.660836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.660892 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.660903 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.660917 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.660925 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.764164 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.764236 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.764250 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.764279 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.764297 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.867187 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.867247 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.867255 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.867269 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.867278 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.971239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.971283 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.971296 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.971314 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.971324 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:28Z","lastTransitionTime":"2026-01-29T12:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:28 crc kubenswrapper[4840]: I0129 12:05:28.981144 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:08:37.018077557 +0000 UTC Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.000988 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.001041 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.001041 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:29 crc kubenswrapper[4840]: E0129 12:05:29.001129 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:29 crc kubenswrapper[4840]: E0129 12:05:29.001272 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:29 crc kubenswrapper[4840]: E0129 12:05:29.001367 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.019684 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\
\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wherea
bouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.033968 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc 
kubenswrapper[4840]: I0129 12:05:29.045511 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.056561 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 
12:05:29.067082 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.074695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.074810 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.074833 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.074859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.074882 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.083375 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.096906 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.111663 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.130853 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.141672 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.153218 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.165038 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.177225 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.179843 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.179882 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.179894 4840 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.179913 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.179929 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.197843 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a1
22dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.211152 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.229098 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.243180 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.257364 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:29Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.282920 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.282971 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.282980 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.282996 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.283005 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.385885 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.385927 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.385978 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.386001 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.386011 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.489239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.489489 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.489497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.489511 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.489520 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.591216 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.591280 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.591288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.591303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.591313 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.694557 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.694601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.694615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.694636 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.694647 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.796836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.796869 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.796877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.796890 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.796899 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.899034 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.899077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.899111 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.899128 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.899143 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:29Z","lastTransitionTime":"2026-01-29T12:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:29 crc kubenswrapper[4840]: I0129 12:05:29.982105 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 21:47:42.280165873 +0000 UTC Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.000412 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:30 crc kubenswrapper[4840]: E0129 12:05:30.000549 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.004020 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.004066 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.004076 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.004092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.004102 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.106818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.106860 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.106870 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.106887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.106922 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.209833 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.209904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.209919 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.209972 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.209996 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.312201 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.312245 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.312257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.312275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.312286 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.414284 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.414314 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.414322 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.414335 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.414344 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.517161 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.517484 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.517615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.517727 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.517823 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.619657 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.619968 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.620035 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.620118 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.620488 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.727125 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.727166 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.727178 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.727197 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.727209 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.829131 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.829191 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.829206 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.829228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.829243 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.931240 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.931285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.931299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.931316 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.931328 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:30Z","lastTransitionTime":"2026-01-29T12:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:30 crc kubenswrapper[4840]: I0129 12:05:30.982526 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 08:24:03.132409552 +0000 UTC
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.000900 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.000985 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:05:31 crc kubenswrapper[4840]: E0129 12:05:31.001056 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:05:31 crc kubenswrapper[4840]: E0129 12:05:31.001110 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.001192 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:05:31 crc kubenswrapper[4840]: E0129 12:05:31.001334 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.033142 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.033179 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.033188 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.033201 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.033210 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.135728 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.135770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.135781 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.135797 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.135808 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.237912 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.237960 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.237969 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.237983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.238013 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.340171 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.340198 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.340206 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.340221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.340229 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.441827 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.441859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.441866 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.441879 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.441889 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.544222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.544276 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.544290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.544312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.544328 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.646446 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.646494 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.646506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.646523 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.646535 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.748596 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.748634 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.748653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.748668 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.748679 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.851477 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.851516 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.851526 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.851540 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.851550 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.954597 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.954646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.954657 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.954677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.954690 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:31Z","lastTransitionTime":"2026-01-29T12:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:31 crc kubenswrapper[4840]: I0129 12:05:31.983260 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 20:56:34.640851758 +0000 UTC
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.000572 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:05:32 crc kubenswrapper[4840]: E0129 12:05:32.000761 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.057290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.057347 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.057359 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.057376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.057388 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.159780 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.159834 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.159846 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.159862 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.159870 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.262898 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.263007 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.263027 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.263058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.263079 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.365639 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.365681 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.365694 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.365709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.365720 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.468329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.468370 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.468379 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.468394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.468405 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.570435 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.570478 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.570488 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.570503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.570513 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.673232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.673274 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.673285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.673302 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.673312 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.776092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.776147 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.776157 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.776174 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.776184 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.878326 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.878368 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.878377 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.878392 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.878402 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.980833 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.980876 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.980887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.980906 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.980918 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:32Z","lastTransitionTime":"2026-01-29T12:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:32 crc kubenswrapper[4840]: I0129 12:05:32.984065 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 20:34:02.198842867 +0000 UTC
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.000459 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:05:33 crc kubenswrapper[4840]: E0129 12:05:33.000622 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.000826 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:05:33 crc kubenswrapper[4840]: E0129 12:05:33.000874 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.001114 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:05:33 crc kubenswrapper[4840]: E0129 12:05:33.001184 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.083787 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.083847 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.083864 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.083887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.083900 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.187248 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.187321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.187336 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.187381 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.187401 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.289708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.289765 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.289777 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.289796 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.289807 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.393278 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.393316 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.393324 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.393337 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.393345 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.495190 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.495221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.495229 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.495243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.495252 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.597031 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.597066 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.597075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.597089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.597098 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.699989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.700044 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.700053 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.700071 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.700080 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.802578 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.802614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.802622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.802637 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.802647 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.904535 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.904568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.904578 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.904591 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.904600 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:33Z","lastTransitionTime":"2026-01-29T12:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:33 crc kubenswrapper[4840]: I0129 12:05:33.985049 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 22:31:28.787582257 +0000 UTC Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.000248 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:34 crc kubenswrapper[4840]: E0129 12:05:34.000392 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.006847 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.006893 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.006905 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.006923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.006936 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.108877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.108911 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.108919 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.108933 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.108995 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.210640 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.210683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.210691 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.210708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.210717 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.313153 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.313205 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.313214 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.313231 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.313241 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.416252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.416301 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.416312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.416339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.416352 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.518791 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.518836 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.518845 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.518859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.518870 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.621067 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.621119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.621127 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.621141 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.621152 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.723817 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.723869 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.723881 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.723900 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.723912 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.827092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.827135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.827147 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.827164 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.827177 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.929311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.929354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.929363 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.929377 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.929387 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:34Z","lastTransitionTime":"2026-01-29T12:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:34 crc kubenswrapper[4840]: I0129 12:05:34.986037 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 03:34:02.104691829 +0000 UTC Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.000460 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.000553 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.000624 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.000682 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.001100 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.001265 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.001470 4840 scope.go:117] "RemoveContainer" containerID="4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2" Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.001710 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.032621 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.032682 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.032695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.032714 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.032727 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.135162 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.135207 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.135222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.135239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.135249 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.237782 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.237823 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.237832 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.237847 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.237857 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.340592 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.340638 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.340649 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.340666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.340679 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.442435 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.442464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.442473 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.442487 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.442495 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.544533 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.544614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.544626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.544646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.544657 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.647400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.647459 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.647469 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.647485 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.647496 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.749770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.749821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.749832 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.749851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.749888 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.809608 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.809655 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.809667 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.809685 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.809696 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.826803 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.831683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.831766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.831779 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.831798 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.831836 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.845098 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.849034 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.849081 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.849091 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.849107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.849118 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.861878 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.867422 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.867452 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.867461 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.867475 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.867485 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.880617 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.884866 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.884896 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.884905 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.884920 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.884929 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.899338 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:35Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:35 crc kubenswrapper[4840]: E0129 12:05:35.899478 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.901346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.901383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.901393 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.901410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.901421 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:35Z","lastTransitionTime":"2026-01-29T12:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:35 crc kubenswrapper[4840]: I0129 12:05:35.986994 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 05:17:36.890496228 +0000 UTC Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.001058 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:36 crc kubenswrapper[4840]: E0129 12:05:36.001190 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.003475 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.003515 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.003527 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.003545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.003556 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.105774 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.105821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.105830 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.105847 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.105857 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.208341 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.208375 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.208383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.208396 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.208405 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.310266 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.310341 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.310353 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.310384 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.310396 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.412889 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.412932 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.412961 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.412978 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.412990 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.515466 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.515527 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.515536 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.515551 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.515562 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.617963 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.618005 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.618016 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.618035 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.618045 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.721627 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.721684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.721693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.721712 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.721725 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.823661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.823709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.823721 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.823736 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.823747 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.925985 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.926033 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.926045 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.926060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.926071 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:36Z","lastTransitionTime":"2026-01-29T12:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:36 crc kubenswrapper[4840]: I0129 12:05:36.987675 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:54:47.041352811 +0000 UTC Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.001071 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:37 crc kubenswrapper[4840]: E0129 12:05:37.001244 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.001183 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:37 crc kubenswrapper[4840]: E0129 12:05:37.001459 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.001614 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:37 crc kubenswrapper[4840]: E0129 12:05:37.001697 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.028297 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.028340 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.028352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.028371 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.028384 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.131333 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.131385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.131397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.131415 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.131426 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.233536 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.233573 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.233588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.233604 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.233619 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.235980 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:37 crc kubenswrapper[4840]: E0129 12:05:37.236086 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:37 crc kubenswrapper[4840]: E0129 12:05:37.236134 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:09.236120004 +0000 UTC m=+100.899099897 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.336088 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.337237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.337271 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.337290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.337309 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.439432 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.442069 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.442085 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.442115 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.442128 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.544588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.544626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.544639 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.544659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.544671 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.646849 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.646898 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.646909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.646927 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.646939 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.749061 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.749104 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.749113 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.749642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.749673 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.852677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.852708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.852716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.852730 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.852739 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.955486 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.955533 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.955543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.955558 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.955567 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:37Z","lastTransitionTime":"2026-01-29T12:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:37 crc kubenswrapper[4840]: I0129 12:05:37.988818 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:30:49.11386869 +0000 UTC Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.001116 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:38 crc kubenswrapper[4840]: E0129 12:05:38.001267 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.058352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.058397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.058408 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.058426 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.058438 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.160931 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.160995 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.161005 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.161020 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.161029 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.263268 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.263327 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.263340 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.263363 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.263374 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.365430 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.365470 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.365481 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.365498 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.365509 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.468261 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.468288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.468297 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.468310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.468319 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.571556 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.571596 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.571608 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.571625 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.571639 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.673997 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.674043 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.674053 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.674070 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.674081 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.776620 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.776707 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.776719 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.776736 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.776746 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.878977 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.879023 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.879032 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.879048 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.879056 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.981222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.981262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.981274 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.981291 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.981301 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:38Z","lastTransitionTime":"2026-01-29T12:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:38 crc kubenswrapper[4840]: I0129 12:05:38.989241 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 22:58:21.219694465 +0000 UTC Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.001137 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.001224 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.001270 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:39 crc kubenswrapper[4840]: E0129 12:05:39.001408 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:39 crc kubenswrapper[4840]: E0129 12:05:39.001471 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:39 crc kubenswrapper[4840]: E0129 12:05:39.001554 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.013359 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc 
kubenswrapper[4840]: I0129 12:05:39.026358 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.037055 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 
12:05:39.047906 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.061467 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9
fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.076581 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.083071 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.083112 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.083123 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.083139 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.083148 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.090302 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.103128 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.115536 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.131081 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.143246 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.155485 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.166633 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.183885 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.184695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.184729 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.184741 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.184756 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.184767 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.198828 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.217933 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 
12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.229422 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.240492 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:39Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.288507 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.288539 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.288548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.288563 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.288574 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.390860 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.390907 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.390921 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.390940 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.390970 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.493545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.493586 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.493595 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.493614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.493622 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.595561 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.595600 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.595611 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.595626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.595637 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.697888 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.698228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.698325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.698430 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.698518 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.801585 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.801826 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.801900 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.801988 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.802060 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.904428 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.904470 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.904481 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.904498 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.904510 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:39Z","lastTransitionTime":"2026-01-29T12:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:39 crc kubenswrapper[4840]: I0129 12:05:39.989807 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 10:31:30.973118429 +0000 UTC Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.001190 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:40 crc kubenswrapper[4840]: E0129 12:05:40.001441 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.007405 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.007547 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.007766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.007998 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.008214 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.111265 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.111509 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.111588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.111674 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.111753 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.214445 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.214639 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.214742 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.214824 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.214893 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.317662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.317725 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.317739 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.317759 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.317773 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.419910 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.420154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.420221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.420308 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.420376 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.522462 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.522743 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.522857 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.522985 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.523135 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.625604 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.625653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.625664 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.625682 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.625693 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.727630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.727661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.727671 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.727684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.727694 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.830300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.830342 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.830353 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.830373 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.830382 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.933010 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.933058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.933071 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.933089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.933100 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:40Z","lastTransitionTime":"2026-01-29T12:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:40 crc kubenswrapper[4840]: I0129 12:05:40.991196 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 14:52:54.569961165 +0000 UTC Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.000789 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.000854 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:41 crc kubenswrapper[4840]: E0129 12:05:41.000959 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.001002 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:41 crc kubenswrapper[4840]: E0129 12:05:41.001067 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:41 crc kubenswrapper[4840]: E0129 12:05:41.001222 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.035156 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.035200 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.035211 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.035229 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.035241 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.136907 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.136970 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.136988 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.137004 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.137014 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.238626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.238665 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.238675 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.238689 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.238698 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.341510 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.341546 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.341555 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.341571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.341581 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.444087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.444130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.444140 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.444155 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.444165 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.546761 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.546796 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.546806 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.546821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.546832 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.649733 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.649779 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.649795 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.649817 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.649875 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.752068 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.752115 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.752146 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.752165 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.752177 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.854660 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.854700 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.854710 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.854724 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.854734 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.957617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.957655 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.957662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.957678 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.957688 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:41Z","lastTransitionTime":"2026-01-29T12:05:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:41 crc kubenswrapper[4840]: I0129 12:05:41.992129 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 06:01:27.12302144 +0000 UTC Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.000389 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:42 crc kubenswrapper[4840]: E0129 12:05:42.000498 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.060683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.060729 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.060739 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.060755 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.060766 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.164286 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.164352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.164393 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.164425 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.164460 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.267489 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.267524 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.267532 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.267546 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.267556 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.370520 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.370565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.370575 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.370590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.370599 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.473774 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.474101 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.474177 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.474258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.474338 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.577097 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.577376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.577444 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.577517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.577576 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.680715 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.680761 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.680771 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.680787 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.680800 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.783394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.783435 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.783446 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.783462 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.783473 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.887073 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.887154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.887172 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.887209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.887551 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.990523 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.990588 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.990601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.990657 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.990676 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:42Z","lastTransitionTime":"2026-01-29T12:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:42 crc kubenswrapper[4840]: I0129 12:05:42.992709 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 01:03:22.355316317 +0000 UTC Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.000585 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.000613 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.000685 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:43 crc kubenswrapper[4840]: E0129 12:05:43.000783 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:43 crc kubenswrapper[4840]: E0129 12:05:43.000881 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:43 crc kubenswrapper[4840]: E0129 12:05:43.001068 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.093011 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.093051 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.093064 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.093082 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.093095 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.196613 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.196681 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.196693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.196712 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.196725 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.299690 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.299760 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.299780 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.299808 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.299834 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.403075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.403142 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.403155 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.403177 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.403211 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.423508 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/0.log" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.423579 4840 generic.go:334] "Generic (PLEG): container finished" podID="d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969" containerID="60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62" exitCode=1 Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.423628 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerDied","Data":"60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.424120 4840 scope.go:117] "RemoveContainer" containerID="60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.460247 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.476458 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"2026-01-29T12:04:57+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b\\\\n2026-01-29T12:04:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b to /host/opt/cni/bin/\\\\n2026-01-29T12:04:58Z [verbose] multus-daemon started\\\\n2026-01-29T12:04:58Z [verbose] Readiness Indicator file check\\\\n2026-01-29T12:05:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.491381 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.507921 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.508431 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.508477 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.508488 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.508504 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.508516 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.524706 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.545518 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.562918 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.577475 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.601992 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.612404 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.612465 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.612482 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.612503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.612516 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.618529 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.639156 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 
12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.653712 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.667639 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.680305 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.695933 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.706151 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.714549 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.714602 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.714615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.714634 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.714645 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.721972 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.733457 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:43Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:43 crc 
kubenswrapper[4840]: I0129 12:05:43.816508 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.816558 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.816567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.816582 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.816592 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.919303 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.919350 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.919363 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.919380 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.919391 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:43Z","lastTransitionTime":"2026-01-29T12:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:43 crc kubenswrapper[4840]: I0129 12:05:43.993386 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 16:59:53.561655706 +0000 UTC Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.000753 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:44 crc kubenswrapper[4840]: E0129 12:05:44.000857 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.021617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.021674 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.021684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.021698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.021709 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.126614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.126687 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.126698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.126716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.126730 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.229700 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.229754 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.229763 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.229781 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.229795 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.331829 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.331892 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.331907 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.331924 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.331935 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.428629 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/0.log" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.428682 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerStarted","Data":"ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.433655 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.433687 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.433695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.433709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.433719 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.443384 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.453597 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.466824 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.477898 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.489036 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.511415 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.526754 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.537263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.537311 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.537323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.537346 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.537358 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.545468 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.559994 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.574279 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.588974 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"2026-01-29T12:04:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b\\\\n2026-01-29T12:04:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b to /host/opt/cni/bin/\\\\n2026-01-29T12:04:58Z [verbose] multus-daemon started\\\\n2026-01-29T12:04:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:05:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.602923 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce7
4e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.615575 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.628422 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.640441 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.640488 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.640503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.640523 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.640534 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.649032 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 
12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.674891 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.689138 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.704399 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:44Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.742783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.742843 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.742861 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.742886 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.742906 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.847102 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.847191 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.847221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.847260 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.847286 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.949999 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.950037 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.950045 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.950058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.950069 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:44Z","lastTransitionTime":"2026-01-29T12:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:44 crc kubenswrapper[4840]: I0129 12:05:44.993852 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 12:13:21.24036718 +0000 UTC Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.003621 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:45 crc kubenswrapper[4840]: E0129 12:05:45.003809 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.004119 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:45 crc kubenswrapper[4840]: E0129 12:05:45.004186 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.004335 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:45 crc kubenswrapper[4840]: E0129 12:05:45.004405 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.052068 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.052130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.052140 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.052154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.052164 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.154256 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.154297 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.154308 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.154351 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.154365 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.257594 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.257659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.257678 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.257701 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.257718 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.360664 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.360705 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.360716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.360732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.360742 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.462932 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.463004 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.463015 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.463031 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.463042 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.565381 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.565441 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.565456 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.565478 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.565494 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.667631 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.667680 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.667691 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.667709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.667719 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.769587 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.769632 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.769642 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.769658 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.769667 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.871887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.871958 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.871970 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.871987 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.871997 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.974186 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.974216 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.974225 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.974237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.974262 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:45Z","lastTransitionTime":"2026-01-29T12:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:45 crc kubenswrapper[4840]: I0129 12:05:45.994413 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 08:11:15.472150663 +0000 UTC Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.000772 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:46 crc kubenswrapper[4840]: E0129 12:05:46.000971 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.076904 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.076975 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.076991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.077009 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.077019 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.179030 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.179077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.179087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.179103 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.179113 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.182342 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.182417 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.182431 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.182458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.182476 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: E0129 12:05:46.202191 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:46Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.206175 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.206231 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.206243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.206262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.206274 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: E0129 12:05:46.221366 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:46Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.228656 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.228714 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.228735 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.228757 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.228770 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: E0129 12:05:46.244231 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:46Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.248307 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.248361 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.248378 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.248399 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.248412 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: E0129 12:05:46.261927 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:46Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.265847 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.265886 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.265897 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.265912 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.265924 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: E0129 12:05:46.279669 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:46Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:46 crc kubenswrapper[4840]: E0129 12:05:46.279800 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.281594 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.281653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.281666 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.281684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.281718 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.384256 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.384366 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.384390 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.384418 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.384440 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.487641 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.487692 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.487708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.487730 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.487743 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.590627 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.590705 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.590723 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.590750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.590769 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.693168 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.693249 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.693273 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.693325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.693351 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.795703 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.795769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.795789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.795812 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.796034 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.898677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.898704 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.898711 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.898724 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.898733 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:46Z","lastTransitionTime":"2026-01-29T12:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:46 crc kubenswrapper[4840]: I0129 12:05:46.995006 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 07:15:00.107048162 +0000 UTC Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.000785 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.000808 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.000817 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.000829 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.000839 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.103057 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.103087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.103095 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.103108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.103117 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.205567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.205605 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.205615 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.205630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.205641 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.307778 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.307807 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.307816 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.307828 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.307837 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.409643 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.409672 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.409680 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.409693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.409701 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.511816 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.511850 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.511859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.511873 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.511884 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.537973 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.538027 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.538044 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.538128 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:05:47 crc kubenswrapper[4840]: E0129 12:05:47.538259 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:05:47 crc kubenswrapper[4840]: E0129 12:05:47.538358 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:05:47 crc kubenswrapper[4840]: E0129 12:05:47.538432 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:05:47 crc kubenswrapper[4840]: E0129 12:05:47.538538 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.613999 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.614092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.614108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.614169 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.614185 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.716919 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.716962 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.716971 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.716983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.716993 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.819568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.819612 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.819628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.819646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.819660 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.921447 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.921498 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.921514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.921534 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.921549 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:47Z","lastTransitionTime":"2026-01-29T12:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:47 crc kubenswrapper[4840]: I0129 12:05:47.995704 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 07:07:41.815881024 +0000 UTC
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.024006 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.024045 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.024055 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.024072 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.024083 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.125760 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.125818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.125830 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.125863 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.125875 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.228010 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.228380 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.228469 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.228599 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.228674 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.330937 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.330997 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.331006 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.331024 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.331035 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.434452 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.434530 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.434542 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.434558 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.434573 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.536636 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.536676 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.536686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.536702 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.536712 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.639048 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.639091 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.639099 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.639111 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.639120 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.742226 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.742274 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.742287 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.742310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.742325 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.845458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.845514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.845531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.845551 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.845564 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.947571 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.947617 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.947626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.947640 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.947652 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:48Z","lastTransitionTime":"2026-01-29T12:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:48 crc kubenswrapper[4840]: I0129 12:05:48.996176 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 18:37:57.38019432 +0000 UTC
Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.000424 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.000424 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.000533 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:05:49 crc kubenswrapper[4840]: E0129 12:05:49.000698 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.000727 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:05:49 crc kubenswrapper[4840]: E0129 12:05:49.000817 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:05:49 crc kubenswrapper[4840]: E0129 12:05:49.000877 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:05:49 crc kubenswrapper[4840]: E0129 12:05:49.000918 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.020902 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z"
Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.037809 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.050041 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.050107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.050119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.050137 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.050147 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.061787 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 
12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.076872 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.092284 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.122892 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.151338 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.152795 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 
12:05:49.152851 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.152865 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.152892 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.152904 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.168883 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.186544 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f
7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.198764 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.211307 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04
:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.227705 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.240432 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.252707 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.256217 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.256263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.256304 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 
12:05:49.256495 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.256523 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.266528 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.280365 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.293800 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"2026-01-29T12:04:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b\\\\n2026-01-29T12:04:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b to /host/opt/cni/bin/\\\\n2026-01-29T12:04:58Z [verbose] multus-daemon started\\\\n2026-01-29T12:04:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:05:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.306507 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:49Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.359459 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.359506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.359517 4840 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.359540 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.359553 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.463129 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.463188 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.463201 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.463219 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.463232 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.567088 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.567525 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.567548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.568002 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.568057 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.670413 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.670450 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.670458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.670470 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.670479 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.772105 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.772383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.772462 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.772542 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.772629 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.874474 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.874698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.874760 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.874818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.874877 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.977107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.977375 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.977491 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.977618 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.977749 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:49Z","lastTransitionTime":"2026-01-29T12:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:49 crc kubenswrapper[4840]: I0129 12:05:49.997358 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 14:30:03.369536122 +0000 UTC Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.001370 4840 scope.go:117] "RemoveContainer" containerID="4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.080830 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.080881 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.080890 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.080909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.080920 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.184039 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.184102 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.184113 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.184130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.184141 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.286797 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.286842 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.286853 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.286900 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.286920 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.389094 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.389133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.389143 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.389159 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.389170 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.491499 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.491535 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.491544 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.491557 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.491566 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.550920 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/2.log" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.553283 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.554221 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.567255 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.581882 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.593183 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"ima
geID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.594211 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.594246 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.594258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.594275 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.594286 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.603781 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.622253 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-
cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.641081 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.661223 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.674470 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"2026-01-29T12:04:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b\\\\n2026-01-29T12:04:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b to /host/opt/cni/bin/\\\\n2026-01-29T12:04:58Z [verbose] multus-daemon started\\\\n2026-01-29T12:04:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:05:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.689632 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce7
4e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.697054 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.697086 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.697095 4840 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.697108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.697118 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.710342 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07
372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.723053 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.737562 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.750150 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.769780 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.784550 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"vo
lumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 
secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.799297 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.799331 4840 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.799340 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.799356 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.799368 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.806770 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 
obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\
"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.836312 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.851224 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:50Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.901531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.901560 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.901567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.901580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.901588 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:50Z","lastTransitionTime":"2026-01-29T12:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:50 crc kubenswrapper[4840]: I0129 12:05:50.997839 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 23:32:54.205698346 +0000 UTC Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.002220 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:51 crc kubenswrapper[4840]: E0129 12:05:51.002352 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.002446 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.002493 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:51 crc kubenswrapper[4840]: E0129 12:05:51.002588 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.002531 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:51 crc kubenswrapper[4840]: E0129 12:05:51.002821 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:51 crc kubenswrapper[4840]: E0129 12:05:51.002849 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.003769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.003819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.003840 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.003872 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.003895 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.106967 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.107431 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.107497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.107601 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.107723 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.210031 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.210295 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.210382 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.210452 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.210509 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.312712 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.312906 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.312986 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.313049 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.313106 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.415485 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.415567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.415579 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.415602 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.415615 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.518643 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.518683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.518692 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.518708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.518720 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.562828 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/3.log" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.563793 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/2.log" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.568351 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548" exitCode=1 Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.568425 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.568502 4840 scope.go:117] "RemoveContainer" containerID="4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.569155 4840 scope.go:117] "RemoveContainer" containerID="8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548" Jan 29 12:05:51 crc kubenswrapper[4840]: E0129 12:05:51.569355 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.585260 4840 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.597745 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.611665 4840 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.621332 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.621457 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.621548 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.621659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.621739 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.629112 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.642208 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc 
kubenswrapper[4840]: I0129 12:05:51.656822 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.669408 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.684907 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.700902 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.716450 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.724251 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.724356 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.724373 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.724392 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.724402 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.730650 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60b9
f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"2026-01-29T12:04:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b\\\\n2026-01-29T12:04:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b to /host/opt/cni/bin/\\\\n2026-01-29T12:04:58Z [verbose] multus-daemon started\\\\n2026-01-29T12:04:58Z [verbose] Readiness Indicator file check\\\\n2026-01-29T12:05:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.741366 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.753916 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.774119 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.788099 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.812050 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4d1d96debbea8bae781b3bf88d19775a3c41491a72f22118d3d9e86157133fd2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:23Z\\\",\\\"message\\\":\\\"ss event on pod openshift-etcd/etcd-crc\\\\nI0129 12:05:23.031599 6482 obj_retry.go:386] 
Retry successful for *v1.Pod openshift-kube-apiserver/kube-apiserver-crc after 0 failed attempt(s)\\\\nI0129 12:05:23.031847 6482 obj_retry.go:303] Retry object setup: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nF0129 12:05:23.031848 6482 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:23Z is after 2025-08-24T17:21:41Z]\\\\nI0129 12:05:23.031854 6482 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-console/networking-console-plugin-85b44fc459-gdk6g\\\\nI0129 12:05:23.031743 6482 obj_retry.\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:51Z\\\",\\\"message\\\":\\\"github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:51.311268 6913 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:05:51.311571 6913 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:51.312418 6913 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0129 12:05:51.312579 6913 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:51.343002 6913 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 12:05:51.343053 6913 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 12:05:51.343150 6913 ovnkube.go:599] Stopped ovnkube\\\\nI0129 12:05:51.343193 6913 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:05:51.343327 6913 ovnkube.go:137] failed to run 
ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.827065 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.827123 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.827135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.827150 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.827159 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.827540 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.842475 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:51Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.929726 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.929760 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.929769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.929783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.929793 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:51Z","lastTransitionTime":"2026-01-29T12:05:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:51 crc kubenswrapper[4840]: I0129 12:05:51.998590 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 12:21:12.231807094 +0000 UTC Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.032570 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.032625 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.032656 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.032682 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.032697 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.135310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.135586 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.135686 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.135777 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.135867 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.243138 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.243457 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.243697 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.243866 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.243938 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.346872 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.346913 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.346924 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.346939 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.346970 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.449716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.449753 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.449765 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.449780 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.449791 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.502934 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.503380 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:06:56.503352473 +0000 UTC m=+148.166332366 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.552198 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.552251 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.552268 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.552293 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.552315 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.574970 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/3.log" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.579256 4840 scope.go:117] "RemoveContainer" containerID="8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548" Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.579541 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.603609 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.603655 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.603692 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.603721 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.603861 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.603878 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.603890 4840 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.603937 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:56.603921935 +0000 UTC m=+148.266901828 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604162 4840 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604200 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:56.604190613 +0000 UTC m=+148.267170516 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604350 4840 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604470 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-29 12:06:56.60445705 +0000 UTC m=+148.267436943 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604353 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604632 4840 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604745 4840 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:52 crc kubenswrapper[4840]: E0129 12:05:52.604836 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:56.604825442 +0000 UTC m=+148.267805335 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.611150 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b0933232
6bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.630984 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.650721 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:51Z\\\",\\\"message\\\":\\\"github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/exte
rnalversions/factory.go:140\\\\nI0129 12:05:51.311268 6913 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:05:51.311571 6913 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:51.312418 6913 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0129 12:05:51.312579 6913 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:51.343002 6913 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 12:05:51.343053 6913 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 12:05:51.343150 6913 ovnkube.go:599] Stopped ovnkube\\\\nI0129 12:05:51.343193 6913 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:05:51.343327 6913 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.654853 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.654897 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.654913 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.654934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.654979 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.668438 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.685096 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.697656 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc 
kubenswrapper[4840]: I0129 12:05:52.713793 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.726906 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":
{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 
12:05:52.738785 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hsvnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.753435 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0d
d97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"s
tate\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-acce
ss-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9
fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.756777 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.756809 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.756822 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.756838 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.756850 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.766891 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.780691 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.796018 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"2026-01-29T12:04:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b\\\\n2026-01-29T12:04:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b to /host/opt/cni/bin/\\\\n2026-01-29T12:04:58Z [verbose] multus-daemon started\\\\n2026-01-29T12:04:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:05:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.807513 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce7
4e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.819470 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.831982 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.845198 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.857083 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:52Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.858671 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.858720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.858736 4840 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.858755 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.858768 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.961250 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.961284 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.961294 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.961308 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.961319 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:52Z","lastTransitionTime":"2026-01-29T12:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:52 crc kubenswrapper[4840]: I0129 12:05:52.999489 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:00:08.905943362 +0000 UTC Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.000804 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:53 crc kubenswrapper[4840]: E0129 12:05:53.000992 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.000829 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:53 crc kubenswrapper[4840]: E0129 12:05:53.001116 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.000804 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:53 crc kubenswrapper[4840]: E0129 12:05:53.001209 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.001006 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:53 crc kubenswrapper[4840]: E0129 12:05:53.001328 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.064161 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.064288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.064313 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.064339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.064355 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.166495 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.166556 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.166567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.166585 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.166597 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.268684 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.268744 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.268759 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.268779 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.268793 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.370937 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.371012 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.371025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.371046 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.371060 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.473867 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.473905 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.473917 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.473931 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.473940 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.576669 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.576726 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.576747 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.576773 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.576789 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.679184 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.679252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.679271 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.679301 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.679319 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.786718 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.786808 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.786824 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.786848 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.786864 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.890299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.890330 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.890339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.890353 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.890362 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.992460 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.992505 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.992513 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.992529 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:53 crc kubenswrapper[4840]: I0129 12:05:53.992539 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:53Z","lastTransitionTime":"2026-01-29T12:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.000635 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 13:55:53.617697703 +0000 UTC Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.095180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.095221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.095231 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.095246 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.095254 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.197792 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.197848 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.197863 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.197881 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.197896 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.300705 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.300746 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.300759 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.300776 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.300787 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.403641 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.403687 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.403695 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.403709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.403722 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.505626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.505667 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.505676 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.505690 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.505700 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.607810 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.607867 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.607877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.607894 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.607906 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.711126 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.711181 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.711192 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.711212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.711226 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.814750 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.814820 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.814834 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.814858 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.814872 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.917220 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.917260 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.917270 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.917288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:54 crc kubenswrapper[4840]: I0129 12:05:54.917297 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:54Z","lastTransitionTime":"2026-01-29T12:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.000754 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.000829 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 10:05:23.706744093 +0000 UTC Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.000996 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:55 crc kubenswrapper[4840]: E0129 12:05:55.001099 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.001132 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.001180 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:55 crc kubenswrapper[4840]: E0129 12:05:55.001296 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:55 crc kubenswrapper[4840]: E0129 12:05:55.001374 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:55 crc kubenswrapper[4840]: E0129 12:05:55.001408 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.020328 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.020385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.020405 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.020428 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.020446 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.124554 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.124630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.124655 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.124685 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.124705 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.227503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.227566 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.227587 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.227633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.227664 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.331173 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.331212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.331222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.331237 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.331247 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.434298 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.434348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.434359 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.434381 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.434395 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.537108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.537172 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.537184 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.537201 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.537214 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.640465 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.640531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.640550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.640576 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.640592 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.742885 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.742920 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.742929 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.742956 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.742965 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.845572 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.845606 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.845658 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.845674 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.845684 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.949906 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.950051 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.950076 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.950107 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:55 crc kubenswrapper[4840]: I0129 12:05:55.950178 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:55Z","lastTransitionTime":"2026-01-29T12:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.001830 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 02:39:01.355257606 +0000 UTC Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.056605 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.056649 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.056661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.056678 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.056689 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.159760 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.159814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.159824 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.159839 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.159850 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.263640 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.263725 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.263745 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.263777 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.263798 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.367058 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.367903 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.368001 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.368093 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.368120 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.427664 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.427740 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.427758 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.427787 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.427806 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: E0129 12:05:56.451982 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.457647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.457737 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.457752 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.457772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.457787 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: E0129 12:05:56.473692 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.478827 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.478905 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.478932 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.478994 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.479072 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: E0129 12:05:56.495376 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.501272 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.501336 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.501348 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.501364 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.501376 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: E0129 12:05:56.515121 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.519846 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.519934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.519978 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.520005 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.520023 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: E0129 12:05:56.534910 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:56Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:56 crc kubenswrapper[4840]: E0129 12:05:56.535080 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.537272 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.537327 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.537338 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.537353 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.537363 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.639632 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.639678 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.639690 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.639710 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.639721 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.743136 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.743211 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.743226 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.743248 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.743267 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.846301 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.846367 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.846382 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.846406 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.846431 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.948733 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.948781 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.948794 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.948812 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:56 crc kubenswrapper[4840]: I0129 12:05:56.948825 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:56Z","lastTransitionTime":"2026-01-29T12:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.001197 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.001243 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.001280 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:05:57 crc kubenswrapper[4840]: E0129 12:05:57.001372 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.001197 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:05:57 crc kubenswrapper[4840]: E0129 12:05:57.001511 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:05:57 crc kubenswrapper[4840]: E0129 12:05:57.001605 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:05:57 crc kubenswrapper[4840]: E0129 12:05:57.001687 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.002334 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:17:15.265987334 +0000 UTC Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.051228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.051280 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.051291 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.051310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.051324 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.153540 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.153580 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.153591 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.153607 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.153620 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.256249 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.256283 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.256292 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.256307 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.256317 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.358474 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.358528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.358537 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.358551 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.358560 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.460564 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.460611 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.460622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.460638 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.460655 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.562528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.562567 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.562578 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.562607 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.562621 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.665372 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.665409 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.665419 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.665432 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.665442 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.768087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.768124 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.768132 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.768145 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.768154 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.870313 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.870358 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.870368 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.870385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.870397 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.973574 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.973626 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.973640 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.973662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:57 crc kubenswrapper[4840]: I0129 12:05:57.973681 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:57Z","lastTransitionTime":"2026-01-29T12:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.003132 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 16:54:23.233143818 +0000 UTC Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.076163 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.076239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.076249 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.076263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.076272 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.178925 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.178991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.179004 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.179021 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.179032 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.282400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.282437 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.282447 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.282461 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.282470 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.385105 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.385158 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.385172 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.385198 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.385223 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.488807 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.488877 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.488895 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.488918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.488932 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.591038 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.591119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.591141 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.591169 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.591188 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.694106 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.694145 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.694154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.694171 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.694181 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.797680 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.797743 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.797762 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.797788 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.797807 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.899544 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.899602 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.899618 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.899641 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:58 crc kubenswrapper[4840]: I0129 12:05:58.899658 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:58Z","lastTransitionTime":"2026-01-29T12:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.001218 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:05:59 crc kubenswrapper[4840]: E0129 12:05:59.001343 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.001381 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.001461 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:05:59 crc kubenswrapper[4840]: E0129 12:05:59.001515 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:05:59 crc kubenswrapper[4840]: E0129 12:05:59.001589 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.001621 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:05:59 crc kubenswrapper[4840]: E0129 12:05:59.001696 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.003294 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:43:35.139575765 +0000 UTC
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.003520 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.003579 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.003599 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.004124 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.004189 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.019498 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a1aeb614f46d63fdcb344970622e42b76bad8d0a11350377ec0c5fa2cdfae65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.036175 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbaf604-6946-4bca-96af-be0e5fc92cf3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://696030f3ba6ba07c6e1ff67813cce855492ae98f0f78f463437dbaf3b9d044a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-b8dpv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-s2v8d\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.048754 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5d6b5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"aa9f54e4-ebb4-467b-92c9-16410e19fbd1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ad98d9f09dab9f7dd3ab734d59e58b0a1448408ac166f7b7c14e0105e9d99630\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:52Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h
svnr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5d6b5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.068972 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-vztt4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3a588a45-d664-486f-9135-b0184d00785a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://45b65b4689650f5b0d6ce4896eb35e4c6b98e3991e21b4d3b61532500d26718f\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://605ffe8b08aaca8c4059e1dc5da7f4e45f117a2a3f9fad7c8a6011cd74ecd732\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://af35052c6801fe14887e34f4b2ac0112061c345dd1d73f9afa6882f970290a93\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ff70104b51b29ec19e2759499cb2214f39bef860146e30e04e9584163e999d8a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6640c33d251b5799935acc914374d473d7b97f7db8ec5daf1ac534498dc6bea5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4716df94d04ddd7d23eae2824f90ed2f7aef5974167088d89166a82562cee418\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"stat
e\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1b876f504dd844ee2b08032ce256b006e5e48f9fc0320250ceeef67f1a7ef2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-x5zjd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-vztt4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.083391 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c69a828d-5ed4-45ac-95a4-f0cc698d6992\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4mp9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-mnzvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc 
kubenswrapper[4840]: I0129 12:05:59.094231 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-7smcd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8000c1f2-217e-480a-8f12-6eec342bafb1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e320075e981fbbce74e70f1585838f616271c607a497fd08af111cf359b88d3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7
j8np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:56Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-7smcd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.106074 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.106112 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.106120 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.106134 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.106145 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.107616 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"09ae3fee-41c9-4a75-b331-b943c574073f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3c3fdc870a34030fb3fd9b89e1c74dfc04924f6dedf113617c923e415dfecb8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ae75e3b546
3825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://398e31e8e7886ed29287e34e5c4b3f71e9893e756b2b13385cac5afec54902d3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d65866ffd068ab16a3f44c53a9474d5015ba93b32951605473c507efc5bb1a7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.124809 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"56e32df1-e061-4ac1-a318-f30f1b932753\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://139c25958324e3f49f3ef2b61f0779122b937c6ffbebb33cd4dd408367a72469\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cafbe8e32846faab26909f99cf3a15e6f463468d5f2059e192973f549bb0c85b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5a633ce55ec465156590cd31b8f4f646588898101a5fa52960276e00e63e73c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://cb317212e86b78b6abf4dc184bc2d97ecf18a900c08bb10c0e05df061d1d393a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.139932 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.154317 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.168867 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://adb990c59acae7ee284097ff67b2bfb4e4e2e82b0fa43846059f366d52ce8297\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.183827 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-2zc5r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:43Z\\\",\\\"message\\\":\\\"2026-01-29T12:04:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b\\\\n2026-01-29T12:04:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_2e681442-3aa2-4a18-a2fa-54193d52ba9b to /host/opt/cni/bin/\\\\n2026-01-29T12:04:58Z [verbose] multus-daemon started\\\\n2026-01-29T12:04:58Z [verbose] 
Readiness Indicator file check\\\\n2026-01-29T12:05:43Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rnntk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-multus\"/\"multus-2zc5r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.195488 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8fa77e68-c6e2-4fc7-bff9-8b350895e913\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://78eeeb9095013ca622e21a9e9159648396aed37ac410de056edeaaf95b44c88c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd386c75a500590ad22e407d3420e68185c7f
67638efeb08b74da19212790b7d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ft9tb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:05:03Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lj65r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.208487 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.208521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.208529 4840 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.208543 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.208553 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.216431 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e
33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866
be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/stati
c-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a1
22dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.231387 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.252089 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:51Z\\\",\\\"message\\\":\\\"github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/exte
rnalversions/factory.go:140\\\\nI0129 12:05:51.311268 6913 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:05:51.311571 6913 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:51.312418 6913 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0129 12:05:51.312579 6913 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:51.343002 6913 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 12:05:51.343053 6913 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 12:05:51.343150 6913 ovnkube.go:599] Stopped ovnkube\\\\nI0129 12:05:51.343193 6913 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:05:51.343327 6913 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.266650 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.279853 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:05:59Z is after 2025-08-24T17:21:41Z" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.310077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.310122 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.310133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.310148 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.310157 4840 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.412744 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.412831 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.412859 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.412893 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.412917 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.514918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.514989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.515007 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.515029 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.515048 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.617702 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.617751 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.617766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.617785 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.617799 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.720468 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.720514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.720531 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.720552 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.720572 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.823934 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.824015 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.824035 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.824057 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.824074 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.926447 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.926494 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.926508 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.926526 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:05:59 crc kubenswrapper[4840]: I0129 12:05:59.926538 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:05:59Z","lastTransitionTime":"2026-01-29T12:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.004061 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 12:42:58.021016032 +0000 UTC Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.020660 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.029347 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.029413 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.029427 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.029450 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.029465 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.131550 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.131647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.131667 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.131700 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.131719 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.234596 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.234630 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.234643 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.234660 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.234671 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.336879 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.336908 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.336918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.336936 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.336982 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.438821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.439075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.439143 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.439208 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.439272 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.542262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.542297 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.542305 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.542322 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.542330 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.644801 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.644834 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.644842 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.644856 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.644865 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.748192 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.748222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.748232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.748247 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.748258 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.850003 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.850257 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.850355 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.850444 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.850518 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.952593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.952983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.953100 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.953224 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:00 crc kubenswrapper[4840]: I0129 12:06:00.953323 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:00Z","lastTransitionTime":"2026-01-29T12:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.000873 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:01 crc kubenswrapper[4840]: E0129 12:06:01.001034 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.001100 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.000882 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:01 crc kubenswrapper[4840]: E0129 12:06:01.001178 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:01 crc kubenswrapper[4840]: E0129 12:06:01.001203 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.001489 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:01 crc kubenswrapper[4840]: E0129 12:06:01.001654 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.004519 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 11:31:49.279188956 +0000 UTC Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.057063 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.057110 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.057123 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.057144 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.057158 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.159894 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.159968 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.159982 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.160005 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.160023 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.263503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.263570 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.263584 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.263604 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.263623 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.366926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.367003 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.367013 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.367032 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.367042 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.469843 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.469880 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.469891 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.469909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.469922 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.578387 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.578444 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.578455 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.578475 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.578488 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.680880 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.680908 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.680917 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.680930 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.680937 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.783464 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.783542 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.783551 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.783585 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.783595 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.886321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.886376 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.886388 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.886402 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.886411 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.988610 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.988648 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.988656 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.988671 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:01 crc kubenswrapper[4840]: I0129 12:06:01.988681 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:01Z","lastTransitionTime":"2026-01-29T12:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.005056 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 19:47:20.559939305 +0000 UTC
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.091528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.091566 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.091575 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.091591 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.091600 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.198656 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.198712 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.198734 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.198754 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.198767 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.303051 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.303113 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.303122 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.303139 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.303150 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.406209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.406256 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.406266 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.406283 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.406293 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.508819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.508883 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.508894 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.508915 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.508927 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.611324 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.611369 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.611380 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.611397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.611408 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.714074 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.714117 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.714129 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.714144 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.714153 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.816576 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.816614 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.816622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.816635 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.816645 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.919245 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.919318 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.919337 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.919357 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:02 crc kubenswrapper[4840]: I0129 12:06:02.919371 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:02Z","lastTransitionTime":"2026-01-29T12:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.000536 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:03 crc kubenswrapper[4840]: E0129 12:06:03.000751 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.000980 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.001010 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.001175 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:03 crc kubenswrapper[4840]: E0129 12:06:03.001298 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:03 crc kubenswrapper[4840]: E0129 12:06:03.001469 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:03 crc kubenswrapper[4840]: E0129 12:06:03.001559 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.005164 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 22:26:48.237816994 +0000 UTC
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.021683 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.021711 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.021720 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.021734 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.021744 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.124451 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.124514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.124529 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.124552 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.124574 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.226756 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.226788 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.226798 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.226814 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.226825 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.328937 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.328986 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.328994 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.329008 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.329016 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.432025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.432068 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.432080 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.432097 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.432119 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.534246 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.534321 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.534339 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.534357 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.534422 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.637516 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.637596 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.637620 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.637650 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.637671 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.740014 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.740071 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.740081 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.740094 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.740102 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.841969 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.842004 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.842013 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.842027 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.842035 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.944628 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.944707 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.944732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.944766 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:03 crc kubenswrapper[4840]: I0129 12:06:03.944788 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:03Z","lastTransitionTime":"2026-01-29T12:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.006266 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 12:05:10.148931615 +0000 UTC
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.047238 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.047302 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.047312 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.047325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.047334 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.149223 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.149269 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.149284 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.149300 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.149312 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.251316 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.251367 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.251378 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.251394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.251407 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.353730 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.353767 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.353776 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.353789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.353800 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.456137 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.456173 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.456183 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.456197 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.456207 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.558535 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.558583 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.558593 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.558608 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.558617 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.661776 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.661813 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.661821 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.661837 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.661871 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.763653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.763687 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.763698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.763717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.763729 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.866258 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.866558 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.866696 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.866795 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.866902 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.969509 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.969860 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.969961 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.970048 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:04 crc kubenswrapper[4840]: I0129 12:06:04.970121 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:04Z","lastTransitionTime":"2026-01-29T12:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.000876 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.000988 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.001042 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.000896 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:05 crc kubenswrapper[4840]: E0129 12:06:05.001075 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:05 crc kubenswrapper[4840]: E0129 12:06:05.001152 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:05 crc kubenswrapper[4840]: E0129 12:06:05.001298 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.007094 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:29:07.565824622 +0000 UTC Jan 29 12:06:05 crc kubenswrapper[4840]: E0129 12:06:05.002237 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.071777 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.071812 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.071820 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.071834 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.071844 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.174018 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.174059 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.174072 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.174088 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.174097 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.276276 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.276318 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.276332 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.276349 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.276361 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.378738 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.378769 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.378778 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.378790 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.378800 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.480728 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.480785 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.480795 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.480828 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.480839 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.583341 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.583384 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.583393 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.583407 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.583416 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.686236 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.686299 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.686310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.686325 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.686336 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.788984 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.789026 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.789035 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.789050 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.789059 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.892314 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.892354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.892363 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.892378 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.892387 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.996073 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.996435 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.996460 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.996492 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:05 crc kubenswrapper[4840]: I0129 12:06:05.996519 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:05Z","lastTransitionTime":"2026-01-29T12:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.007266 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 00:48:29.741093113 +0000 UTC Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.099493 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.099574 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.099590 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.099613 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.099626 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.202400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.202736 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.202835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.202963 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.203050 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.306430 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.306468 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.306480 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.306501 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.306514 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.408871 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.408912 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.408925 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.408942 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.408986 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.511607 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.511662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.511678 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.511699 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.511714 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.614285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.614394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.614407 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.614424 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.614437 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.717589 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.717622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.717633 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.717650 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.717661 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.719726 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.719795 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.719808 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.719820 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.719828 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: E0129 12:06:06.734734 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.739345 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.739394 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.739407 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.739427 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.739439 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: E0129 12:06:06.753990 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.757700 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.757907 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.758040 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.758188 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.758290 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: E0129 12:06:06.772595 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.776987 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.777032 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.777044 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.777063 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.777074 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: E0129 12:06:06.789414 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.794865 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.794923 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.794938 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.794983 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.794996 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: E0129 12:06:06.809886 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404552Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865352Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-29T12:06:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f6451459-d462-4b03-b6c7-2b939b62ff4d\\\",\\\"systemUUID\\\":\\\"021759b1-2f25-49e1-8fe5-59c6e27efb1d\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:06Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:06 crc kubenswrapper[4840]: E0129 12:06:06.810096 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.819888 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.820120 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.820140 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.820156 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.820167 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.922770 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.922804 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.922815 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.922830 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:06 crc kubenswrapper[4840]: I0129 12:06:06.922842 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:06Z","lastTransitionTime":"2026-01-29T12:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.001119 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.001169 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.001128 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.001220 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:07 crc kubenswrapper[4840]: E0129 12:06:07.001555 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:07 crc kubenswrapper[4840]: E0129 12:06:07.001794 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:07 crc kubenswrapper[4840]: E0129 12:06:07.001746 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:07 crc kubenswrapper[4840]: E0129 12:06:07.001885 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.007530 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 04:55:32.201168929 +0000 UTC Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.025466 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.025506 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.025514 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.025528 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.025556 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.127185 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.127231 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.127243 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.127263 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.127276 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.229732 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.229772 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.229783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.229800 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.229813 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.332233 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.332264 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.332272 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.332290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.332299 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.434850 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.434887 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.434895 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.434909 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.434919 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.537160 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.537207 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.537216 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.537232 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.537242 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.638890 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.639177 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.639250 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.639323 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.639393 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.742021 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.742064 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.742076 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.742091 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.742104 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.844918 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.845189 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.845277 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.845368 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.845436 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.947906 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.947979 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.947989 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.948004 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:07 crc kubenswrapper[4840]: I0129 12:06:07.948014 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:07Z","lastTransitionTime":"2026-01-29T12:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.001087 4840 scope.go:117] "RemoveContainer" containerID="8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548" Jan 29 12:06:08 crc kubenswrapper[4840]: E0129 12:06:08.001251 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.010190 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 16:20:18.297833791 +0000 UTC Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.050322 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.050382 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.050396 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.050415 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.050449 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.153108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.153144 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.153154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.153168 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.153179 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.256140 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.256507 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.256622 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.256702 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.256761 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.358668 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.359072 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.359274 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.359445 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.359592 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.462097 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.462128 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.462139 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.462152 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.462161 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.564818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.565180 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.565278 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.565382 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.565475 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.667755 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.667803 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.667818 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.667835 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.667847 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.770007 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.770252 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.770317 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.770388 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.770450 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.872734 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.872775 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.872789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.872803 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.872813 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.975618 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.975659 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.975668 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.975682 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:08 crc kubenswrapper[4840]: I0129 12:06:08.975693 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:08Z","lastTransitionTime":"2026-01-29T12:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.000323 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.000364 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:09 crc kubenswrapper[4840]: E0129 12:06:09.000511 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.000541 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:09 crc kubenswrapper[4840]: E0129 12:06:09.001070 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.001130 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:09 crc kubenswrapper[4840]: E0129 12:06:09.001909 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:09 crc kubenswrapper[4840]: E0129 12:06:09.002641 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.011418 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:07:45.701344289 +0000 UTC Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.022293 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"54c1b179-33d3-4fad-ab22-4a07507f8bc1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://583e58f5264e2a48eb8eaf02087ee8b6e0bd70766b581b12c73cc64c4d5274fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3969fd732cb2a3cf329ba7663b986097007917ddffae27cdc7e376b190ecf9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d52f7bbf3edceed3811703fe84aff6d7b35699436c8452034c75c3acc62f31d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c8cb1521cb88cd007ad51d8f0ec93b77adc48f1e4cd8cc0522d15d26f2847bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3f5dee30f6d6dcd355c127e0ccaed8d951196478c1a65f25fef8abde4c9df566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e72a2519b108aeaef26aa0a353f5b09332326bfa05ad2e5a3a1cc23cc1ca461\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8597ab4ff97124fb557bd40132c314e63223ba98df4ea2a4fe1402afcc4169c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da61f9f95f8d316965eeebdd28ed080f7721709f97a122dc33ff3b2ca6cc735d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.044345 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0ad1298-8886-4cc3-892d-7574685f0c3c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:05:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-29T12:04:48Z\\\"
,\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0129 12:04:42.421348 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0129 12:04:42.430642 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3929218116/tls.crt::/tmp/serving-cert-3929218116/tls.key\\\\\\\"\\\\nI0129 12:04:48.552728 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0129 12:04:48.557147 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0129 12:04:48.557177 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0129 12:04:48.557199 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0129 12:04:48.557207 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0129 12:04:48.567506 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0129 12:04:48.567543 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567553 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0129 12:04:48.567563 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nI0129 12:04:48.567553 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0129 12:04:48.567571 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0129 12:04:48.567594 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0129 12:04:48.567607 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0129 12:04:48.569717 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9212df66f3db4829a9993afd08eb20c9
157da14bbf546768c6546b650f120a92\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:29Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.062965 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b331ae03-7000-435b-8cb4-65da0c67d876\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:51Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-29T12:05:51Z\\\",\\\"message\\\":\\\"github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/exte
rnalversions/factory.go:140\\\\nI0129 12:05:51.311268 6913 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0129 12:05:51.311571 6913 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0129 12:05:51.312418 6913 reflector.go:311] Stopping reflector *v1.NetworkAttachmentDefinition (0s) from github.com/k8snetworkplumbingwg/network-attachment-definition-client/pkg/client/informers/externalversions/factory.go:117\\\\nI0129 12:05:51.312579 6913 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0129 12:05:51.343002 6913 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0129 12:05:51.343053 6913 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0129 12:05:51.343150 6913 ovnkube.go:599] Stopped ovnkube\\\\nI0129 12:05:51.343193 6913 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0129 12:05:51.343327 6913 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-29T12:05:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5acbf1b4bdbbcbae31
9dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-29T12:04:54Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-29T12:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n4wbn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-29T12:04:51Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-vl4fj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.076737 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:50Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://939858e1ecf5e98caa0900cd4364013d786e70c41da6c2c98dc4f1ea189b4324\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-29T12:04:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.078340 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.078477 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.078661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.078783 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.078895 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.090446 4840 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-29T12:04:46Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-29T12:06:09Z is after 2025-08-24T17:21:41Z" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.126366 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podStartSLOduration=79.126347302 podStartE2EDuration="1m19.126347302s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.126178686 +0000 UTC m=+100.789158599" watchObservedRunningTime="2026-01-29 12:06:09.126347302 +0000 UTC m=+100.789327195" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.153644 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5d6b5" podStartSLOduration=79.153628455 podStartE2EDuration="1m19.153628455s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.138333085 +0000 UTC m=+100.801312978" watchObservedRunningTime="2026-01-29 12:06:09.153628455 +0000 UTC m=+100.816608348" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.153770 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-vztt4" podStartSLOduration=78.153764619 podStartE2EDuration="1m18.153764619s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.153565563 +0000 UTC m=+100.816545456" watchObservedRunningTime="2026-01-29 12:06:09.153764619 +0000 UTC m=+100.816744512" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.179783 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-2zc5r" podStartSLOduration=78.179766305 podStartE2EDuration="1m18.179766305s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.179520917 +0000 UTC m=+100.842500820" watchObservedRunningTime="2026-01-29 12:06:09.179766305 +0000 UTC m=+100.842746198" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.181025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.181054 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.181067 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.181083 4840 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.181094 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.192092 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-7smcd" podStartSLOduration=79.192070827 podStartE2EDuration="1m19.192070827s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.191847231 +0000 UTC m=+100.854827144" watchObservedRunningTime="2026-01-29 12:06:09.192070827 +0000 UTC m=+100.855050720" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.203587 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=9.203564376 podStartE2EDuration="9.203564376s" podCreationTimestamp="2026-01-29 12:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.203561736 +0000 UTC m=+100.866541629" watchObservedRunningTime="2026-01-29 12:06:09.203564376 +0000 UTC m=+100.866544269" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.219845 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=76.219825155 
podStartE2EDuration="1m16.219825155s" podCreationTimestamp="2026-01-29 12:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.21965333 +0000 UTC m=+100.882633253" watchObservedRunningTime="2026-01-29 12:06:09.219825155 +0000 UTC m=+100.882805048" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.233140 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=45.233121777 podStartE2EDuration="45.233121777s" podCreationTimestamp="2026-01-29 12:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.232634282 +0000 UTC m=+100.895614175" watchObservedRunningTime="2026-01-29 12:06:09.233121777 +0000 UTC m=+100.896101670" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.282972 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.283016 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.283026 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.283046 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.283057 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.286801 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lj65r" podStartSLOduration=78.286788077 podStartE2EDuration="1m18.286788077s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:09.286582801 +0000 UTC m=+100.949562694" watchObservedRunningTime="2026-01-29 12:06:09.286788077 +0000 UTC m=+100.949767970" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.298164 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:09 crc kubenswrapper[4840]: E0129 12:06:09.298290 4840 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:09 crc kubenswrapper[4840]: E0129 12:06:09.298349 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs podName:c69a828d-5ed4-45ac-95a4-f0cc698d6992 nodeName:}" failed. No retries permitted until 2026-01-29 12:07:13.298334197 +0000 UTC m=+164.961314090 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs") pod "network-metrics-daemon-mnzvc" (UID: "c69a828d-5ed4-45ac-95a4-f0cc698d6992") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.385075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.385125 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.385137 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.385155 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.385167 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.487441 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.487489 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.487502 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.487521 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.487536 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.590262 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.590584 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.590709 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.590838 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.590995 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.694399 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.694426 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.694434 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.694446 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.694454 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.796789 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.796829 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.796839 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.796855 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.796867 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.899553 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.899864 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.900057 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.900227 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:09 crc kubenswrapper[4840]: I0129 12:06:09.900362 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:09Z","lastTransitionTime":"2026-01-29T12:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.003070 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.003120 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.003135 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.003158 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.003175 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.012314 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 12:03:14.141375062 +0000 UTC Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.105976 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.106024 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.106036 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.106055 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.106071 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.208800 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.208838 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.208849 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.208866 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.208878 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.311225 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.311507 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.311565 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.311582 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.311593 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.414494 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.414547 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.414556 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.414572 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.414581 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.517765 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.517801 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.517808 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.517823 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.517832 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.620290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.620341 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.620349 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.620364 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.620375 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.723089 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.723132 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.723144 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.723160 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.723171 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.825165 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.825476 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.825493 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.825517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.825535 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.927398 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.927485 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.927499 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.927517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:10 crc kubenswrapper[4840]: I0129 12:06:10.927528 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:10Z","lastTransitionTime":"2026-01-29T12:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.001265 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:11 crc kubenswrapper[4840]: E0129 12:06:11.001416 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.001278 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.001477 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:11 crc kubenswrapper[4840]: E0129 12:06:11.001533 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:11 crc kubenswrapper[4840]: E0129 12:06:11.001611 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.001603 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:11 crc kubenswrapper[4840]: E0129 12:06:11.001786 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.013124 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 14:21:10.50483572 +0000 UTC
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.029717 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.029754 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.029765 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.029779 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.029788 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.132335 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.132369 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.132378 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.132390 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.132399 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.234489 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.234529 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.234542 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.234559 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.234569 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.337379 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.337424 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.337432 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.337446 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.337456 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.440203 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.440240 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.440248 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.440261 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.440269 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.542587 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.542647 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.542656 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.542670 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.542681 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.645470 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.645517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.645529 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.645545 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.645555 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.747960 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.748005 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.748015 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.748029 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.748038 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.849444 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.849481 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.849492 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.849508 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.849519 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.952764 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.952819 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.952838 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.952861 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:11 crc kubenswrapper[4840]: I0129 12:06:11.952877 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:11Z","lastTransitionTime":"2026-01-29T12:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.014169 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 02:11:57.151686601 +0000 UTC
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.055637 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.055669 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.055677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.055693 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.055709 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.158422 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.158486 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.158503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.158525 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.158542 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.261093 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.261127 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.261138 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.261153 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.261164 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.363722 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.363759 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.363768 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.363784 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.363796 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.466406 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.466458 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.466467 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.466482 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.466491 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.568990 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.569025 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.569033 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.569047 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.569056 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.671021 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.671060 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.671068 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.671098 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.671107 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.773523 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.773568 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.773578 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.773595 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.773606 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.875465 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.875505 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.875513 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.875530 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:12 crc kubenswrapper[4840]: I0129 12:06:12.875539 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:12Z","lastTransitionTime":"2026-01-29T12:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.002503 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.002559 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.002584 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:13 crc kubenswrapper[4840]: E0129 12:06:13.002640 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:13 crc kubenswrapper[4840]: E0129 12:06:13.002738 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.002764 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:13 crc kubenswrapper[4840]: E0129 12:06:13.002878 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:13 crc kubenswrapper[4840]: E0129 12:06:13.002994 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.004334 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.004371 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.004383 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.004402 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.004415 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.015033 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 14:25:34.588388447 +0000 UTC
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.106310 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.106360 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.106373 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.106393 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.106409 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.208954 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.209028 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.209045 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.209061 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.209073 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.311154 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.311199 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.311212 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.311228 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.311239 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.413811 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.413857 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.413869 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.413884 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.413896 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.516739 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.516782 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.516793 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.516808 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.516820 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.619606 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.619661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.619677 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.619698 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.619711 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.721937 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.721991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.722004 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.722020 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.722032 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.824455 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.824503 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.824513 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.824530 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.824540 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.926517 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.926569 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.926579 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.926599 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:13 crc kubenswrapper[4840]: I0129 12:06:13.926611 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:13Z","lastTransitionTime":"2026-01-29T12:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.015970 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 18:42:55.255128753 +0000 UTC Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.029307 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.029410 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.029477 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.029499 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.029511 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.131038 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.131075 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.131086 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.131102 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.131114 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.233003 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.233064 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.233077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.233092 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.233100 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.336018 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.336063 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.336074 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.336090 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.336104 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.438288 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.438334 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.438345 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.438362 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.438373 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.541077 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.541120 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.541133 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.541149 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.541165 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.644065 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.644100 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.644108 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.644122 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.644131 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.746290 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.746324 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.746335 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.746351 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.746362 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.849017 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.849074 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.849083 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.849098 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.849108 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.951343 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.951385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.951397 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.951417 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:14 crc kubenswrapper[4840]: I0129 12:06:14.951429 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:14Z","lastTransitionTime":"2026-01-29T12:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.000909 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.001015 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.001048 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:15 crc kubenswrapper[4840]: E0129 12:06:15.001111 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.001128 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:15 crc kubenswrapper[4840]: E0129 12:06:15.001209 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:15 crc kubenswrapper[4840]: E0129 12:06:15.001277 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:15 crc kubenswrapper[4840]: E0129 12:06:15.001366 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.016046 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 18:55:56.390462233 +0000 UTC Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.053600 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.053646 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.053657 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.053672 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.053682 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.183166 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.183209 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.183221 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.183239 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.183250 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.285882 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.285988 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.286011 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.286037 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.286055 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.387979 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.388026 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.388039 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.388056 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.388066 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.490158 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.490195 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.490207 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.490222 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.490234 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.592661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.593205 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.593365 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.593505 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.593672 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.697130 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.697205 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.697214 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.697248 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.697258 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.800352 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.800422 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.800433 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.800456 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.800471 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.902661 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.902708 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.902716 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.902734 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:15 crc kubenswrapper[4840]: I0129 12:06:15.902748 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:15Z","lastTransitionTime":"2026-01-29T12:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.005935 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.006033 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.006042 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.006056 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.006066 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.017045 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 06:14:26.983510501 +0000 UTC
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.108230 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.108285 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.108298 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.108322 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.108339 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.211342 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.211385 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.211400 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.211416 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.211428 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.315004 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.315087 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.315110 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.315144 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.315169 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.418119 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.418194 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.418214 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.418245 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.418265 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.521282 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.521329 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.521340 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.521354 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.521363 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.623724 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.623767 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.623776 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.623791 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.623801 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.726653 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.726738 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.726759 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.726791 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.726815 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.829560 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.829648 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.829662 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.829691 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.829708 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.932497 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.932549 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.932562 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.932584 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:16 crc kubenswrapper[4840]: I0129 12:06:16.932596 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:16Z","lastTransitionTime":"2026-01-29T12:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.000832 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.000876 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.000925 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.000874 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:17 crc kubenswrapper[4840]: E0129 12:06:17.001084 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:17 crc kubenswrapper[4840]: E0129 12:06:17.001182 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:17 crc kubenswrapper[4840]: E0129 12:06:17.001262 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:17 crc kubenswrapper[4840]: E0129 12:06:17.001416 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.018223 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 04:43:14.348696735 +0000 UTC
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.026866 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.026926 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.026977 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.027008 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.027030 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:17Z","lastTransitionTime":"2026-01-29T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.045855 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.045916 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.045935 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.045991 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.046009 4840 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-29T12:06:17Z","lastTransitionTime":"2026-01-29T12:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.072450 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"]
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.072802 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.075327 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.075773 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.076593 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.078418 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.099845 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.099891 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.099927 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.099975 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.100078 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.131494 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=91.131473188 podStartE2EDuration="1m31.131473188s" podCreationTimestamp="2026-01-29 12:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:17.126937935 +0000 UTC m=+108.789917828" watchObservedRunningTime="2026-01-29 12:06:17.131473188 +0000 UTC m=+108.794453091"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.152791 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=89.152770745 podStartE2EDuration="1m29.152770745s" podCreationTimestamp="2026-01-29 12:04:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:17.151179809 +0000 UTC m=+108.814159712" watchObservedRunningTime="2026-01-29 12:06:17.152770745 +0000 UTC m=+108.815750648"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.200704 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.200752 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.200788 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.200818 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.200832 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.201265 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.201265 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.202222 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.215888 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.224686 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/121b3d32-e61b-4f83-aee5-00d2ccfc20a3-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8bg68\" (UID: \"121b3d32-e61b-4f83-aee5-00d2ccfc20a3\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.385101 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68"
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.656151 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68" event={"ID":"121b3d32-e61b-4f83-aee5-00d2ccfc20a3","Type":"ContainerStarted","Data":"86e1ac5c3e7de5b7d137471b6113a1fe7fb88ae7d04759a70776aeaf8e98df07"}
Jan 29 12:06:17 crc kubenswrapper[4840]: I0129 12:06:17.656500 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68" event={"ID":"121b3d32-e61b-4f83-aee5-00d2ccfc20a3","Type":"ContainerStarted","Data":"4bd2e7d82d1033dc59b9437150c95ee19a3d522030b4b1fbc83a184610a991a8"}
Jan 29 12:06:18 crc kubenswrapper[4840]: I0129 12:06:18.018604 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:56:08.417699714 +0000 UTC
Jan 29 12:06:18 crc kubenswrapper[4840]: I0129 12:06:18.018700 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Jan 29 12:06:18 crc kubenswrapper[4840]: I0129 12:06:18.029292 4840 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 29 12:06:19 crc kubenswrapper[4840]: I0129 12:06:19.000569 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:19 crc kubenswrapper[4840]: I0129 12:06:19.000569 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:19 crc kubenswrapper[4840]: I0129 12:06:19.000673 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:19 crc kubenswrapper[4840]: E0129 12:06:19.006515 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:19 crc kubenswrapper[4840]: I0129 12:06:19.006704 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:19 crc kubenswrapper[4840]: E0129 12:06:19.006860 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:19 crc kubenswrapper[4840]: I0129 12:06:19.008095 4840 scope.go:117] "RemoveContainer" containerID="8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548"
Jan 29 12:06:19 crc kubenswrapper[4840]: E0129 12:06:19.008792 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-vl4fj_openshift-ovn-kubernetes(b331ae03-7000-435b-8cb4-65da0c67d876)\"" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876"
Jan 29 12:06:19 crc kubenswrapper[4840]: E0129 12:06:19.008793 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:19 crc kubenswrapper[4840]: E0129 12:06:19.008957 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:21 crc kubenswrapper[4840]: I0129 12:06:21.001261 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:21 crc kubenswrapper[4840]: I0129 12:06:21.001333 4840 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:21 crc kubenswrapper[4840]: I0129 12:06:21.001399 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:21 crc kubenswrapper[4840]: E0129 12:06:21.001428 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:21 crc kubenswrapper[4840]: I0129 12:06:21.001583 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:21 crc kubenswrapper[4840]: E0129 12:06:21.001651 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:21 crc kubenswrapper[4840]: E0129 12:06:21.001603 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:21 crc kubenswrapper[4840]: E0129 12:06:21.001887 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:23 crc kubenswrapper[4840]: I0129 12:06:23.000543 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:23 crc kubenswrapper[4840]: I0129 12:06:23.000543 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:23 crc kubenswrapper[4840]: I0129 12:06:23.001174 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:23 crc kubenswrapper[4840]: E0129 12:06:23.001301 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:23 crc kubenswrapper[4840]: I0129 12:06:23.001323 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:23 crc kubenswrapper[4840]: E0129 12:06:23.001397 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:23 crc kubenswrapper[4840]: E0129 12:06:23.001495 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:23 crc kubenswrapper[4840]: E0129 12:06:23.001564 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:25 crc kubenswrapper[4840]: I0129 12:06:25.000722 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:25 crc kubenswrapper[4840]: I0129 12:06:25.000847 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:25 crc kubenswrapper[4840]: I0129 12:06:25.000740 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:25 crc kubenswrapper[4840]: E0129 12:06:25.000871 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:25 crc kubenswrapper[4840]: I0129 12:06:25.001023 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:25 crc kubenswrapper[4840]: E0129 12:06:25.001032 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:25 crc kubenswrapper[4840]: E0129 12:06:25.001105 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:25 crc kubenswrapper[4840]: E0129 12:06:25.001233 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:27 crc kubenswrapper[4840]: I0129 12:06:27.000616 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:27 crc kubenswrapper[4840]: I0129 12:06:27.000674 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:27 crc kubenswrapper[4840]: I0129 12:06:27.000691 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:27 crc kubenswrapper[4840]: I0129 12:06:27.000638 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:27 crc kubenswrapper[4840]: E0129 12:06:27.000929 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:27 crc kubenswrapper[4840]: E0129 12:06:27.001056 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:27 crc kubenswrapper[4840]: E0129 12:06:27.001145 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:27 crc kubenswrapper[4840]: E0129 12:06:27.001256 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:28 crc kubenswrapper[4840]: E0129 12:06:28.993743 4840 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.000784 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.000801 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:29 crc kubenswrapper[4840]: E0129 12:06:29.002239 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.002314 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:29 crc kubenswrapper[4840]: E0129 12:06:29.002404 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:29 crc kubenswrapper[4840]: E0129 12:06:29.002528 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.002559 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:29 crc kubenswrapper[4840]: E0129 12:06:29.003082 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:29 crc kubenswrapper[4840]: E0129 12:06:29.130238 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.693571 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/1.log" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.694194 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/0.log" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.694265 4840 generic.go:334] "Generic (PLEG): container finished" podID="d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969" containerID="ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06" exitCode=1 Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.694311 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerDied","Data":"ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06"} Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.694361 4840 scope.go:117] "RemoveContainer" containerID="60b9f2c3087a52cd4f8d7894f1b599d35a3d8b940280d04e98898e78fc824f62" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.695297 4840 scope.go:117] "RemoveContainer" containerID="ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06" Jan 29 12:06:29 crc kubenswrapper[4840]: E0129 12:06:29.695708 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-2zc5r_openshift-multus(d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969)\"" pod="openshift-multus/multus-2zc5r" podUID="d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969" Jan 29 12:06:29 crc kubenswrapper[4840]: I0129 12:06:29.716297 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8bg68" podStartSLOduration=99.716258415 
podStartE2EDuration="1m39.716258415s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:17.671792942 +0000 UTC m=+109.334772845" watchObservedRunningTime="2026-01-29 12:06:29.716258415 +0000 UTC m=+121.379238388" Jan 29 12:06:30 crc kubenswrapper[4840]: I0129 12:06:30.701498 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/1.log" Jan 29 12:06:31 crc kubenswrapper[4840]: I0129 12:06:31.000430 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:31 crc kubenswrapper[4840]: I0129 12:06:31.000425 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:31 crc kubenswrapper[4840]: E0129 12:06:31.000654 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:31 crc kubenswrapper[4840]: I0129 12:06:31.000579 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:31 crc kubenswrapper[4840]: I0129 12:06:31.000454 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:31 crc kubenswrapper[4840]: E0129 12:06:31.000847 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:31 crc kubenswrapper[4840]: E0129 12:06:31.000898 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:31 crc kubenswrapper[4840]: E0129 12:06:31.001017 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.001291 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.001423 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.001913 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:33 crc kubenswrapper[4840]: E0129 12:06:33.002168 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.002375 4840 scope.go:117] "RemoveContainer" containerID="8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548" Jan 29 12:06:33 crc kubenswrapper[4840]: E0129 12:06:33.002432 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:33 crc kubenswrapper[4840]: E0129 12:06:33.002893 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.002382 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:33 crc kubenswrapper[4840]: E0129 12:06:33.003112 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.712598 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/3.log" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.715917 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerStarted","Data":"3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc"} Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.716799 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.946900 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podStartSLOduration=102.946881684 podStartE2EDuration="1m42.946881684s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:06:33.74338673 +0000 UTC 
m=+125.406366623" watchObservedRunningTime="2026-01-29 12:06:33.946881684 +0000 UTC m=+125.609861577" Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.947734 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mnzvc"] Jan 29 12:06:33 crc kubenswrapper[4840]: I0129 12:06:33.947812 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc" Jan 29 12:06:33 crc kubenswrapper[4840]: E0129 12:06:33.947889 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992" Jan 29 12:06:34 crc kubenswrapper[4840]: E0129 12:06:34.132256 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 29 12:06:35 crc kubenswrapper[4840]: I0129 12:06:35.000677 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 29 12:06:35 crc kubenswrapper[4840]: I0129 12:06:35.000723 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:06:35 crc kubenswrapper[4840]: I0129 12:06:35.000981 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 29 12:06:35 crc kubenswrapper[4840]: E0129 12:06:35.001033 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 29 12:06:35 crc kubenswrapper[4840]: E0129 12:06:35.001153 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 29 12:06:35 crc kubenswrapper[4840]: E0129 12:06:35.001214 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 29 12:06:36 crc kubenswrapper[4840]: I0129 12:06:36.000205 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:36 crc kubenswrapper[4840]: E0129 12:06:36.000381 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:37 crc kubenswrapper[4840]: I0129 12:06:37.001233 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:37 crc kubenswrapper[4840]: I0129 12:06:37.001336 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:37 crc kubenswrapper[4840]: E0129 12:06:37.001483 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:37 crc kubenswrapper[4840]: I0129 12:06:37.001936 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:37 crc kubenswrapper[4840]: E0129 12:06:37.002085 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:37 crc kubenswrapper[4840]: E0129 12:06:37.001927 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:38 crc kubenswrapper[4840]: I0129 12:06:38.001260 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:38 crc kubenswrapper[4840]: E0129 12:06:38.001830 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:39 crc kubenswrapper[4840]: I0129 12:06:39.000555 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:39 crc kubenswrapper[4840]: I0129 12:06:39.000620 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:39 crc kubenswrapper[4840]: I0129 12:06:39.000766 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:39 crc kubenswrapper[4840]: E0129 12:06:39.001741 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:39 crc kubenswrapper[4840]: E0129 12:06:39.001904 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:39 crc kubenswrapper[4840]: E0129 12:06:39.002127 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:39 crc kubenswrapper[4840]: E0129 12:06:39.133040 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 12:06:40 crc kubenswrapper[4840]: I0129 12:06:40.000981 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:40 crc kubenswrapper[4840]: E0129 12:06:40.001180 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:41 crc kubenswrapper[4840]: I0129 12:06:41.001337 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:41 crc kubenswrapper[4840]: I0129 12:06:41.001444 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:41 crc kubenswrapper[4840]: E0129 12:06:41.001517 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:41 crc kubenswrapper[4840]: E0129 12:06:41.001691 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:41 crc kubenswrapper[4840]: I0129 12:06:41.001732 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:41 crc kubenswrapper[4840]: E0129 12:06:41.001806 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:42 crc kubenswrapper[4840]: I0129 12:06:42.000884 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:42 crc kubenswrapper[4840]: E0129 12:06:42.001187 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:43 crc kubenswrapper[4840]: I0129 12:06:43.000822 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:43 crc kubenswrapper[4840]: I0129 12:06:43.001018 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:43 crc kubenswrapper[4840]: I0129 12:06:43.001770 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:43 crc kubenswrapper[4840]: E0129 12:06:43.001932 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:43 crc kubenswrapper[4840]: E0129 12:06:43.002184 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:43 crc kubenswrapper[4840]: E0129 12:06:43.002299 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:43 crc kubenswrapper[4840]: I0129 12:06:43.004209 4840 scope.go:117] "RemoveContainer" containerID="ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06"
Jan 29 12:06:43 crc kubenswrapper[4840]: I0129 12:06:43.754308 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/1.log"
Jan 29 12:06:43 crc kubenswrapper[4840]: I0129 12:06:43.754661 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerStarted","Data":"2a2fc0da70c75ed38629049bf5f40c727f55fba7a52031ca48c5e3d8ada6f6fe"}
Jan 29 12:06:44 crc kubenswrapper[4840]: I0129 12:06:44.000176 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:44 crc kubenswrapper[4840]: E0129 12:06:44.000295 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:44 crc kubenswrapper[4840]: E0129 12:06:44.134266 4840 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 29 12:06:45 crc kubenswrapper[4840]: I0129 12:06:45.000891 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:45 crc kubenswrapper[4840]: I0129 12:06:45.000930 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:45 crc kubenswrapper[4840]: I0129 12:06:45.000891 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:45 crc kubenswrapper[4840]: E0129 12:06:45.001049 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:45 crc kubenswrapper[4840]: E0129 12:06:45.001125 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:45 crc kubenswrapper[4840]: E0129 12:06:45.001181 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:46 crc kubenswrapper[4840]: I0129 12:06:46.000735 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:46 crc kubenswrapper[4840]: E0129 12:06:46.000871 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:47 crc kubenswrapper[4840]: I0129 12:06:47.000606 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:47 crc kubenswrapper[4840]: I0129 12:06:47.000625 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:47 crc kubenswrapper[4840]: I0129 12:06:47.000702 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:47 crc kubenswrapper[4840]: E0129 12:06:47.001516 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:47 crc kubenswrapper[4840]: E0129 12:06:47.001638 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:47 crc kubenswrapper[4840]: E0129 12:06:47.001774 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:48 crc kubenswrapper[4840]: I0129 12:06:48.001133 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:48 crc kubenswrapper[4840]: E0129 12:06:48.001413 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-mnzvc" podUID="c69a828d-5ed4-45ac-95a4-f0cc698d6992"
Jan 29 12:06:49 crc kubenswrapper[4840]: I0129 12:06:49.000814 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:49 crc kubenswrapper[4840]: E0129 12:06:49.003290 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 29 12:06:49 crc kubenswrapper[4840]: I0129 12:06:49.003331 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:49 crc kubenswrapper[4840]: I0129 12:06:49.003405 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:49 crc kubenswrapper[4840]: E0129 12:06:49.003528 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 29 12:06:49 crc kubenswrapper[4840]: E0129 12:06:49.003882 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 29 12:06:50 crc kubenswrapper[4840]: I0129 12:06:50.000664 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:06:50 crc kubenswrapper[4840]: I0129 12:06:50.004002 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 12:06:50 crc kubenswrapper[4840]: I0129 12:06:50.004018 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 12:06:51 crc kubenswrapper[4840]: I0129 12:06:51.001379 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:51 crc kubenswrapper[4840]: I0129 12:06:51.001506 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:51 crc kubenswrapper[4840]: I0129 12:06:51.001412 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:51 crc kubenswrapper[4840]: I0129 12:06:51.004568 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 29 12:06:51 crc kubenswrapper[4840]: I0129 12:06:51.004855 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 12:06:51 crc kubenswrapper[4840]: I0129 12:06:51.005386 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 12:06:51 crc kubenswrapper[4840]: I0129 12:06:51.005414 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 12:06:53 crc kubenswrapper[4840]: I0129 12:06:53.522124 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:06:53 crc kubenswrapper[4840]: I0129 12:06:53.522228 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.548102 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:06:56 crc kubenswrapper[4840]: E0129 12:06:56.548403 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:08:58.54836205 +0000 UTC m=+270.211342013 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.650241 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.650383 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.650459 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.650547 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.654088 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.663979 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.664313 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.665103 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.718880 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.729449 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:56 crc kubenswrapper[4840]: I0129 12:06:56.736892 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 29 12:06:56 crc kubenswrapper[4840]: W0129 12:06:56.998818 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-ca13535dd7dcb5c2c17805fb874f3cec912a0173300836fda3e4bd9b32c220e5 WatchSource:0}: Error finding container ca13535dd7dcb5c2c17805fb874f3cec912a0173300836fda3e4bd9b32c220e5: Status 404 returned error can't find the container with id ca13535dd7dcb5c2c17805fb874f3cec912a0173300836fda3e4bd9b32c220e5
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.016560 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ca13535dd7dcb5c2c17805fb874f3cec912a0173300836fda3e4bd9b32c220e5"}
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.017570 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"075822966eff4944e5f56c5f764c513b1c6af34d96ca569c49df6710b31bda78"}
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.018380 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"4e46dcdd80f5a32493de36de36aaa1685c18eb4d133011c1bd9e7d84038b61e9"}
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.524957 4840 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.567409 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bg94b"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.568224 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.568449 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rjjwm"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.569051 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.570768 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.570941 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.574655 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.574809 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.574841 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.575390 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.576141 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.576645 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.576934 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.577204 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.578616 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.579636 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.580565 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.586215 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.610904 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-ksbxx"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.611683 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ksbxx"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.612730 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.613571 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-dltrs"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.614006 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dltrs"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.615317 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.615512 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.615988 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-98jrd"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.616700 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.617369 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.621261 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.621893 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l64hx"]
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.622277 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.623074 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.623832 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.624098 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.624129 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.624347 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.624460 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129
12:06:57.624543 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.627516 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.629788 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.631356 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.631557 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.634824 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.635146 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.635365 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.635608 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.635752 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.636073 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.636465 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.636652 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.637467 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.637569 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.637679 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.637802 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.637840 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.637977 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.638024 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.638108 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.638175 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.637984 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.638378 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.639120 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gj6zv"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.639166 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.639507 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.640822 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.641755 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.642038 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.642294 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-thmqb"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.642915 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.645109 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.645244 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.645430 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.645546 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.645657 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.645791 4840 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.645986 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.646164 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.646921 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.647159 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.649087 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.649613 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.650049 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.650227 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8pbcl"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.650605 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.651374 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hxjgh"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.675980 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.681133 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-config\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.703244 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.703450 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.705743 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706402 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706977 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.705758 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.707392 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.705875 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706028 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706144 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706152 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706214 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706240 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706545 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706701 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.710843 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706775 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.706859 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711184 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711266 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711350 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711386 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711477 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711531 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711637 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.711763 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712066 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712081 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712202 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712329 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712415 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712332 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712429 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712523 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712620 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 
12:06:57.712658 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712718 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712803 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712891 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.712999 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.713106 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.713129 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-service-ca-bundle\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.713203 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.713644 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.713763 4840 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.713857 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.713984 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.714095 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.714191 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.714500 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/405b2371-c32c-4697-8c78-35c6a19a8b7a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.715468 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.716354 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bg94b"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.716412 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fb75\" (UniqueName: \"kubernetes.io/projected/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-kube-api-access-6fb75\") pod 
\"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.716499 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.717385 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.718083 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721007 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7crbn\" (UniqueName: \"kubernetes.io/projected/c960c9b6-a14e-45f6-9adb-a3944ee2c575-kube-api-access-7crbn\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721146 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721181 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-client-ca\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721208 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c960c9b6-a14e-45f6-9adb-a3944ee2c575-serving-cert\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721241 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-config\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721272 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721299 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/405b2371-c32c-4697-8c78-35c6a19a8b7a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721338 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-serving-cert\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721366 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd8fh\" (UniqueName: \"kubernetes.io/projected/405b2371-c32c-4697-8c78-35c6a19a8b7a-kube-api-access-fd8fh\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.721397 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/405b2371-c32c-4697-8c78-35c6a19a8b7a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.722894 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.723401 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.724191 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.727109 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.729330 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-dz9sd"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.729703 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.729998 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.730482 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.730844 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.731800 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.735298 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.735533 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.740397 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.742926 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.743246 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bj49k"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.743752 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.747674 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8fwb9"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.749060 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.750824 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.755728 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.758480 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.759795 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.764692 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.770594 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.774043 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.782802 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.783386 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.784900 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.785966 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.791360 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.805130 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.805804 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.806014 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.805844 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-799dp"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.806664 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.806829 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.807591 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.811302 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.815004 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rjjwm"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.816517 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.817427 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.817522 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.818347 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.819860 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qdhws"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822160 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822207 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-config\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822238 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619b4a63-b1aa-4d9f-a662-043696d19d1e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822270 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8jv\" (UniqueName: \"kubernetes.io/projected/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-kube-api-access-jp8jv\") pod \"console-operator-58897d9998-gj6zv\" (UID: 
\"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822337 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48xk\" (UniqueName: \"kubernetes.io/projected/7d571f03-5043-4b23-bfe9-7d82584ef243-kube-api-access-w48xk\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822357 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822381 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkf7\" (UniqueName: \"kubernetes.io/projected/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-kube-api-access-2dkf7\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822398 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-audit-policies\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822421 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-service-ca-bundle\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822448 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-etcd-serving-ca\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822476 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-images\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822497 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-srv-cert\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822520 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntchp\" (UniqueName: \"kubernetes.io/projected/f1a38ee6-33eb-4c68-955a-1253ef95d412-kube-api-access-ntchp\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " 
pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822544 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h7qb\" (UniqueName: \"kubernetes.io/projected/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-kube-api-access-8h7qb\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822567 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083790b8-20fc-4566-b7c9-e5eb39e22b8a-trusted-ca\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822589 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822616 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wpvz\" (UniqueName: \"kubernetes.io/projected/6465231e-0bca-475f-a603-b55c8c37d810-kube-api-access-7wpvz\") pod \"cluster-samples-operator-665b6dd947-r7skm\" (UID: \"6465231e-0bca-475f-a603-b55c8c37d810\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822639 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqxht\" (UniqueName: \"kubernetes.io/projected/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-kube-api-access-lqxht\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822666 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-client-ca\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822692 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9kvc\" (UniqueName: \"kubernetes.io/projected/083790b8-20fc-4566-b7c9-e5eb39e22b8a-kube-api-access-s9kvc\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822721 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1efab996-1c86-4d07-882c-45ee5f49ffe6-proxy-tls\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822784 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-config\") pod 
\"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822838 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7707ef9-e15f-44ab-8008-2126af490048-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822861 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnm8k\" (UniqueName: \"kubernetes.io/projected/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-kube-api-access-tnm8k\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822885 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqdsf\" (UniqueName: \"kubernetes.io/projected/03b827e0-ff80-4021-9b29-72935f8fe30b-kube-api-access-hqdsf\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822907 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822929 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/567d236c-6d37-488c-ace0-b8986046e68b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.822992 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823015 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsnb8\" (UniqueName: \"kubernetes.io/projected/df2082a0-52e3-4557-bdbf-9f5f654f00b4-kube-api-access-tsnb8\") pod \"downloads-7954f5f757-ksbxx\" (UID: \"df2082a0-52e3-4557-bdbf-9f5f654f00b4\") " pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823036 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-config\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823056 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-jxwr2\" (UniqueName: \"kubernetes.io/projected/e11e87ad-33de-4aeb-9742-e70801a6d526-kube-api-access-jxwr2\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823075 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbf7z\" (UniqueName: \"kubernetes.io/projected/db1a668d-d1f8-476c-846c-63fec292a9db-kube-api-access-qbf7z\") pod \"dns-operator-744455d44c-hxjgh\" (UID: \"db1a668d-d1f8-476c-846c-63fec292a9db\") " pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823104 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-client-ca\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823123 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-etcd-client\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823142 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-trusted-ca\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc 
kubenswrapper[4840]: I0129 12:06:57.823160 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-trusted-ca-bundle\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823181 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggnqs\" (UniqueName: \"kubernetes.io/projected/619b4a63-b1aa-4d9f-a662-043696d19d1e-kube-api-access-ggnqs\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823237 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-serving-cert\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823257 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xklxv\" (UniqueName: \"kubernetes.io/projected/83d6878b-0209-421c-bff8-2e02c478e338-kube-api-access-xklxv\") pod \"migrator-59844c95c7-q4xmw\" (UID: \"83d6878b-0209-421c-bff8-2e02c478e338\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823278 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823291 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-config\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823297 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/405b2371-c32c-4697-8c78-35c6a19a8b7a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823368 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-encryption-config\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823399 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-config\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823436 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-cabundle\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823460 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-metrics-certs\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823480 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823499 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-client\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823518 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-config\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " 
pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823536 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db1a668d-d1f8-476c-846c-63fec292a9db-metrics-tls\") pod \"dns-operator-744455d44c-hxjgh\" (UID: \"db1a668d-d1f8-476c-846c-63fec292a9db\") " pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823565 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd8fh\" (UniqueName: \"kubernetes.io/projected/405b2371-c32c-4697-8c78-35c6a19a8b7a-kube-api-access-fd8fh\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823588 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823606 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/663d1816-5db6-4cd5-9ed4-1e8353228748-kube-api-access-nz8l2\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823628 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-default-certificate\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823647 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-machine-approver-tls\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823665 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823684 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj2pz\" (UniqueName: \"kubernetes.io/projected/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-kube-api-access-wj2pz\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823704 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e11e87ad-33de-4aeb-9742-e70801a6d526-audit-dir\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" 
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823722 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-service-ca-bundle\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823777 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-image-import-ca\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823828 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qwth\" (UniqueName: \"kubernetes.io/projected/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-kube-api-access-6qwth\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823865 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbksv\" (UniqueName: \"kubernetes.io/projected/b7da4209-9141-464e-a112-51470c4785d6-kube-api-access-vbksv\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823900 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-audit-policies\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823930 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-config\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.823982 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/083790b8-20fc-4566-b7c9-e5eb39e22b8a-metrics-tls\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.824012 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619b4a63-b1aa-4d9f-a662-043696d19d1e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.824987 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-auth-proxy-config\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 
12:06:57.825037 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/405b2371-c32c-4697-8c78-35c6a19a8b7a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825075 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-stats-auth\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825475 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-etcd-client\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825544 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fb75\" (UniqueName: \"kubernetes.io/projected/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-kube-api-access-6fb75\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825583 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: 
\"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825646 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7707ef9-e15f-44ab-8008-2126af490048-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825679 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-service-ca\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825707 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq9m9\" (UniqueName: \"kubernetes.io/projected/fe84cd00-e72e-446a-981d-5f7c0e9304af-kube-api-access-zq9m9\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825710 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-client-ca\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825796 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825851 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcfac6d9-7960-4572-b7bc-ded6694a3733-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9qw27\" (UID: \"dcfac6d9-7960-4572-b7bc-ded6694a3733\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.825896 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk9bg\" (UniqueName: \"kubernetes.io/projected/8f623a10-e56f-42f3-8d77-a1b6f083712a-kube-api-access-fk9bg\") pod \"multus-admission-controller-857f4d67dd-bj49k\" (UID: \"8f623a10-e56f-42f3-8d77-a1b6f083712a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826006 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-service-ca\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826105 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-serving-cert\") pod \"apiserver-7bbb656c7d-kc2rh\" 
(UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826174 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7crbn\" (UniqueName: \"kubernetes.io/projected/c960c9b6-a14e-45f6-9adb-a3944ee2c575-kube-api-access-7crbn\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826209 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-config\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826374 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-serving-cert\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826465 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826529 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7707ef9-e15f-44ab-8008-2126af490048-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826715 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7da4209-9141-464e-a112-51470c4785d6-audit-dir\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826760 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs7fk\" (UniqueName: \"kubernetes.io/projected/dcfac6d9-7960-4572-b7bc-ded6694a3733-kube-api-access-xs7fk\") pod \"package-server-manager-789f6589d5-9qw27\" (UID: \"dcfac6d9-7960-4572-b7bc-ded6694a3733\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826786 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826760 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwqw5"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826848 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-w4chl\" (UniqueName: \"kubernetes.io/projected/1efab996-1c86-4d07-882c-45ee5f49ffe6-kube-api-access-w4chl\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826886 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826915 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567d236c-6d37-488c-ace0-b8986046e68b-config\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.826982 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe84cd00-e72e-446a-981d-5f7c0e9304af-secret-volume\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.827014 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.827042 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d571f03-5043-4b23-bfe9-7d82584ef243-audit-dir\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.827086 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-serving-cert\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.827835 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/083790b8-20fc-4566-b7c9-e5eb39e22b8a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.827905 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-config\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828013 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8f623a10-e56f-42f3-8d77-a1b6f083712a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bj49k\" (UID: \"8f623a10-e56f-42f3-8d77-a1b6f083712a\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828069 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c960c9b6-a14e-45f6-9adb-a3944ee2c575-serving-cert\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828601 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/405b2371-c32c-4697-8c78-35c6a19a8b7a-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828704 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/405b2371-c32c-4697-8c78-35c6a19a8b7a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828757 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828785 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828805 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828818 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828832 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828784 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e11e87ad-33de-4aeb-9742-e70801a6d526-node-pullsecrets\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828883 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828921 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.828926 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.829553 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-98jrd"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.829701 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-config\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.832119 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c960c9b6-a14e-45f6-9adb-a3944ee2c575-serving-cert\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.833698 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834534 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834675 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnw4t\" (UniqueName: \"kubernetes.io/projected/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-kube-api-access-cnw4t\") pod 
\"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834705 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-audit\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834750 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-oauth-serving-cert\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834776 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-serving-cert\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834797 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b827e0-ff80-4021-9b29-72935f8fe30b-serving-cert\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834881 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-trusted-ca-bundle\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834906 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-encryption-config\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834924 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567d236c-6d37-488c-ace0-b8986046e68b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834965 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-serving-cert\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.834985 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835002 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835054 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-ca\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835076 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1efab996-1c86-4d07-882c-45ee5f49ffe6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835094 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-config\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835115 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6465231e-0bca-475f-a603-b55c8c37d810-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r7skm\" (UID: \"6465231e-0bca-475f-a603-b55c8c37d810\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835135 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-oauth-config\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835506 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c960c9b6-a14e-45f6-9adb-a3944ee2c575-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835643 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/405b2371-c32c-4697-8c78-35c6a19a8b7a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835716 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835747 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1a38ee6-33eb-4c68-955a-1253ef95d412-service-ca-bundle\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835776 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/663d1816-5db6-4cd5-9ed4-1e8353228748-serving-cert\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835807 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-key\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.835833 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.836209 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.838063 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-serving-cert\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.846707 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gj6zv"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.849124 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.850045 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ksbxx"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.851094 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.851608 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dltrs"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.852783 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-thmqb"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.853899 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 
12:06:57.855229 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.856298 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-78t8x"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.857345 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.857954 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x9hv2"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.858815 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x9hv2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.859427 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ndtmk"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.860210 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.860824 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8fwb9"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.862458 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l64hx"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.863623 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.864780 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.866673 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hxjgh"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.867871 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.870757 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.875700 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.877459 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-78t8x"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.879490 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8pbcl"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 
12:06:57.881229 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bj49k"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.883288 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.885432 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.887639 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x9hv2"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.889529 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ndtmk"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.894042 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.905005 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.906659 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.908278 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.910698 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwqw5"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.911053 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.912195 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.913573 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.914819 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qdhws"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.916321 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.917596 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-799dp"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.918925 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.920493 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-m87r7"] Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.921485 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.931121 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937351 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wpvz\" (UniqueName: \"kubernetes.io/projected/6465231e-0bca-475f-a603-b55c8c37d810-kube-api-access-7wpvz\") pod \"cluster-samples-operator-665b6dd947-r7skm\" (UID: \"6465231e-0bca-475f-a603-b55c8c37d810\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937401 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqxht\" (UniqueName: \"kubernetes.io/projected/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-kube-api-access-lqxht\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937429 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-client-ca\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937449 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9kvc\" (UniqueName: \"kubernetes.io/projected/083790b8-20fc-4566-b7c9-e5eb39e22b8a-kube-api-access-s9kvc\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937473 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1efab996-1c86-4d07-882c-45ee5f49ffe6-proxy-tls\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937491 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-config\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937510 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqdsf\" (UniqueName: \"kubernetes.io/projected/03b827e0-ff80-4021-9b29-72935f8fe30b-kube-api-access-hqdsf\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937527 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7707ef9-e15f-44ab-8008-2126af490048-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937546 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnm8k\" (UniqueName: 
\"kubernetes.io/projected/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-kube-api-access-tnm8k\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937564 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937587 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsnb8\" (UniqueName: \"kubernetes.io/projected/df2082a0-52e3-4557-bdbf-9f5f654f00b4-kube-api-access-tsnb8\") pod \"downloads-7954f5f757-ksbxx\" (UID: \"df2082a0-52e3-4557-bdbf-9f5f654f00b4\") " pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937602 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-config\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937618 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/567d236c-6d37-488c-ace0-b8986046e68b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:57 crc kubenswrapper[4840]: 
I0129 12:06:57.937637 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-etcd-client\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937653 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxwr2\" (UniqueName: \"kubernetes.io/projected/e11e87ad-33de-4aeb-9742-e70801a6d526-kube-api-access-jxwr2\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937669 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbf7z\" (UniqueName: \"kubernetes.io/projected/db1a668d-d1f8-476c-846c-63fec292a9db-kube-api-access-qbf7z\") pod \"dns-operator-744455d44c-hxjgh\" (UID: \"db1a668d-d1f8-476c-846c-63fec292a9db\") " pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937685 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ggnqs\" (UniqueName: \"kubernetes.io/projected/619b4a63-b1aa-4d9f-a662-043696d19d1e-kube-api-access-ggnqs\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937705 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-trusted-ca\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " 
pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937725 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-trusted-ca-bundle\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937744 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-encryption-config\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937762 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-serving-cert\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937778 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xklxv\" (UniqueName: \"kubernetes.io/projected/83d6878b-0209-421c-bff8-2e02c478e338-kube-api-access-xklxv\") pod \"migrator-59844c95c7-q4xmw\" (UID: \"83d6878b-0209-421c-bff8-2e02c478e338\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937797 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-config\") pod \"console-f9d7485db-dltrs\" (UID: 
\"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937817 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-cabundle\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937834 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-metrics-certs\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937850 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db1a668d-d1f8-476c-846c-63fec292a9db-metrics-tls\") pod \"dns-operator-744455d44c-hxjgh\" (UID: \"db1a668d-d1f8-476c-846c-63fec292a9db\") " pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937867 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937885 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-client\") pod 
\"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937903 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-config\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937918 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/663d1816-5db6-4cd5-9ed4-1e8353228748-kube-api-access-nz8l2\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937935 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-default-certificate\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.937985 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938009 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938031 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-machine-approver-tls\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938049 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e11e87ad-33de-4aeb-9742-e70801a6d526-audit-dir\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938097 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj2pz\" (UniqueName: \"kubernetes.io/projected/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-kube-api-access-wj2pz\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938117 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-image-import-ca\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938143 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6qwth\" (UniqueName: \"kubernetes.io/projected/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-kube-api-access-6qwth\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938169 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbksv\" (UniqueName: \"kubernetes.io/projected/b7da4209-9141-464e-a112-51470c4785d6-kube-api-access-vbksv\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938190 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619b4a63-b1aa-4d9f-a662-043696d19d1e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938208 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-audit-policies\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938227 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-config\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938243 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/083790b8-20fc-4566-b7c9-e5eb39e22b8a-metrics-tls\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938265 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-auth-proxy-config\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938283 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-stats-auth\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938300 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-etcd-client\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938318 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7707ef9-e15f-44ab-8008-2126af490048-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: 
\"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938344 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938361 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-service-ca\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938378 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq9m9\" (UniqueName: \"kubernetes.io/projected/fe84cd00-e72e-446a-981d-5f7c0e9304af-kube-api-access-zq9m9\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938396 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938415 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcfac6d9-7960-4572-b7bc-ded6694a3733-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9qw27\" (UID: \"dcfac6d9-7960-4572-b7bc-ded6694a3733\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938440 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk9bg\" (UniqueName: \"kubernetes.io/projected/8f623a10-e56f-42f3-8d77-a1b6f083712a-kube-api-access-fk9bg\") pod \"multus-admission-controller-857f4d67dd-bj49k\" (UID: \"8f623a10-e56f-42f3-8d77-a1b6f083712a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938465 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-serving-cert\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938483 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-service-ca\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938521 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-config\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc 
kubenswrapper[4840]: I0129 12:06:57.938542 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-serving-cert\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938566 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938587 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7707ef9-e15f-44ab-8008-2126af490048-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938610 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7da4209-9141-464e-a112-51470c4785d6-audit-dir\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938627 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: 
\"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938646 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs7fk\" (UniqueName: \"kubernetes.io/projected/dcfac6d9-7960-4572-b7bc-ded6694a3733-kube-api-access-xs7fk\") pod \"package-server-manager-789f6589d5-9qw27\" (UID: \"dcfac6d9-7960-4572-b7bc-ded6694a3733\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938664 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4chl\" (UniqueName: \"kubernetes.io/projected/1efab996-1c86-4d07-882c-45ee5f49ffe6-kube-api-access-w4chl\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938740 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567d236c-6d37-488c-ace0-b8986046e68b-config\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938762 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe84cd00-e72e-446a-981d-5f7c0e9304af-secret-volume\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938778 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-serving-cert\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938794 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938812 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d571f03-5043-4b23-bfe9-7d82584ef243-audit-dir\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938829 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/083790b8-20fc-4566-b7c9-e5eb39e22b8a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938845 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8f623a10-e56f-42f3-8d77-a1b6f083712a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bj49k\" (UID: \"8f623a10-e56f-42f3-8d77-a1b6f083712a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938861 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e11e87ad-33de-4aeb-9742-e70801a6d526-node-pullsecrets\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938922 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938974 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.938993 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939012 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-oauth-serving-cert\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " 
pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939031 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnw4t\" (UniqueName: \"kubernetes.io/projected/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-kube-api-access-cnw4t\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939049 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-audit\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939068 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-serving-cert\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939085 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b827e0-ff80-4021-9b29-72935f8fe30b-serving-cert\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939103 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-trusted-ca-bundle\") pod \"apiserver-76f77b778f-98jrd\" (UID: 
\"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939121 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939187 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939235 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-encryption-config\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939254 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567d236c-6d37-488c-ace0-b8986046e68b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939274 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-config\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939291 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-ca\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939308 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1efab996-1c86-4d07-882c-45ee5f49ffe6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939332 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6465231e-0bca-475f-a603-b55c8c37d810-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r7skm\" (UID: \"6465231e-0bca-475f-a603-b55c8c37d810\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939348 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-oauth-config\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939367 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1a38ee6-33eb-4c68-955a-1253ef95d412-service-ca-bundle\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939385 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939470 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/663d1816-5db6-4cd5-9ed4-1e8353228748-serving-cert\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939509 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-key\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939530 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939551 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939571 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkf7\" (UniqueName: \"kubernetes.io/projected/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-kube-api-access-2dkf7\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939595 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619b4a63-b1aa-4d9f-a662-043696d19d1e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939620 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jp8jv\" (UniqueName: \"kubernetes.io/projected/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-kube-api-access-jp8jv\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939641 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w48xk\" (UniqueName: 
\"kubernetes.io/projected/7d571f03-5043-4b23-bfe9-7d82584ef243-kube-api-access-w48xk\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939767 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939801 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-audit-policies\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939826 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-etcd-serving-ca\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939849 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-srv-cert\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939873 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-images\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939896 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h7qb\" (UniqueName: \"kubernetes.io/projected/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-kube-api-access-8h7qb\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939920 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083790b8-20fc-4566-b7c9-e5eb39e22b8a-trusted-ca\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939938 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntchp\" (UniqueName: \"kubernetes.io/projected/f1a38ee6-33eb-4c68-955a-1253ef95d412-kube-api-access-ntchp\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.939986 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.943002 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/e11e87ad-33de-4aeb-9742-e70801a6d526-audit-dir\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.943470 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-client-ca\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.944528 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-config\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.944916 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-trusted-ca\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.945009 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/619b4a63-b1aa-4d9f-a662-043696d19d1e-config\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.945071 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.945688 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-audit-policies\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.946039 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-trusted-ca-bundle\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.946143 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-config\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.946281 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7da4209-9141-464e-a112-51470c4785d6-audit-dir\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: 
\"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.946358 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-config\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.946665 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-oauth-serving-cert\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.947025 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.947326 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-auth-proxy-config\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.947471 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-config\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.947599 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.947774 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-audit\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.948644 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-service-ca\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.948673 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-config\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.948684 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.949298 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.949381 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.949440 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7d571f03-5043-4b23-bfe9-7d82584ef243-audit-dir\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.949503 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-config\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.949776 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.950058 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-audit-policies\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.950151 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-image-import-ca\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.950356 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.950499 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-thmqb"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.950526 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-service-ca\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.950625 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/e11e87ad-33de-4aeb-9742-e70801a6d526-node-pullsecrets\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.951036 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-machine-approver-tls\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.951492 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.952638 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.952663 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-etcd-serving-ca\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.952864 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-images\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.953034 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-config\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.953607 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954009 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-serving-cert\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954148 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-ca\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954208 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954185 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/663d1816-5db6-4cd5-9ed4-1e8353228748-serving-cert\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954350 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e11e87ad-33de-4aeb-9742-e70801a6d526-trusted-ca-bundle\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954405 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d571f03-5043-4b23-bfe9-7d82584ef243-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954542 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954584 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-serving-cert\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.954901 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1efab996-1c86-4d07-882c-45ee5f49ffe6-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.955834 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-etcd-client\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.956338 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-serving-cert\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.956430 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7d571f03-5043-4b23-bfe9-7d82584ef243-encryption-config\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.956479 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/663d1816-5db6-4cd5-9ed4-1e8353228748-etcd-client\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.956802 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-serving-cert\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.957546 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db1a668d-d1f8-476c-846c-63fec292a9db-metrics-tls\") pod \"dns-operator-744455d44c-hxjgh\" (UID: \"db1a668d-d1f8-476c-846c-63fec292a9db\") " pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.957643 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.957720 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6465231e-0bca-475f-a603-b55c8c37d810-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-r7skm\" (UID: \"6465231e-0bca-475f-a603-b55c8c37d810\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.957801 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-serving-cert\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.958022 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/619b4a63-b1aa-4d9f-a662-043696d19d1e-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.958129 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-thmqb"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.958373 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-encryption-config\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.958649 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e11e87ad-33de-4aeb-9742-e70801a6d526-etcd-client\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.958667 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.959724 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-oauth-config\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.960559 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b827e0-ff80-4021-9b29-72935f8fe30b-serving-cert\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.971633 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 29 12:06:57 crc kubenswrapper[4840]: I0129 12:06:57.993020 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.011141 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.025310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"5c0fbaedce8b45cb77804ccc9a3c01a8ae3f4708959d439eb83bf63814dd9713"}
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.025593 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.027139 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"faca78c4cf8f03ad2dd60dfbe569311cb97c7b42602b1c47d0cf3c267de6c4ff"}
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.029859 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"6a5e865e62bc772b1758c29ec6fd9896bede895e45acd500c0a2dc015341e6d1"}
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.032891 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.051475 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.060307 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/083790b8-20fc-4566-b7c9-e5eb39e22b8a-metrics-tls\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.076367 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.082572 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/083790b8-20fc-4566-b7c9-e5eb39e22b8a-trusted-ca\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.092832 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.112064 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.118264 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567d236c-6d37-488c-ace0-b8986046e68b-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.133733 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.151098 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.158702 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567d236c-6d37-488c-ace0-b8986046e68b-config\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.172574 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.191589 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.212584 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.218682 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7707ef9-e15f-44ab-8008-2126af490048-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.231986 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.239172 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7707ef9-e15f-44ab-8008-2126af490048-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.250434 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.272495 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.291143 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.310601 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.331104 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.354747 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.371972 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.391400 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.411072 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.432091 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.452348 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.470421 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.476531 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-default-certificate\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.491665 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.501980 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-stats-auth\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.510580 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.520684 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f1a38ee6-33eb-4c68-955a-1253ef95d412-metrics-certs\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.530707 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.535242 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1a38ee6-33eb-4c68-955a-1253ef95d412-service-ca-bundle\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.550510 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.571633 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.591350 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.604883 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-srv-cert\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.610538 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.614649 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-profile-collector-cert\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.621745 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe84cd00-e72e-446a-981d-5f7c0e9304af-secret-volume\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.631000 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.650966 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.674578 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.682880 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8f623a10-e56f-42f3-8d77-a1b6f083712a-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-bj49k\" (UID: \"8f623a10-e56f-42f3-8d77-a1b6f083712a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.692252 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.732257 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.752196 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.769527 4840 request.go:700] Waited for 1.009245253s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&limit=500&resourceVersion=0
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.771456 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.791899 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.811147 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.831682 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.851651 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.871222 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.892361 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.912168 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.923104 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/dcfac6d9-7960-4572-b7bc-ded6694a3733-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-9qw27\" (UID: \"dcfac6d9-7960-4572-b7bc-ded6694a3733\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.935983 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.944899 4840 secret.go:188] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.945072 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1efab996-1c86-4d07-882c-45ee5f49ffe6-proxy-tls podName:1efab996-1c86-4d07-882c-45ee5f49ffe6 nodeName:}" failed. No retries permitted until 2026-01-29 12:06:59.44503562 +0000 UTC m=+151.108015513 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/1efab996-1c86-4d07-882c-45ee5f49ffe6-proxy-tls") pod "machine-config-controller-84d6567774-pnlbm" (UID: "1efab996-1c86-4d07-882c-45ee5f49ffe6") : failed to sync secret cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.945358 4840 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.945418 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume podName:fe84cd00-e72e-446a-981d-5f7c0e9304af nodeName:}" failed. No retries permitted until 2026-01-29 12:06:59.445408122 +0000 UTC m=+151.108388025 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume") pod "collect-profiles-29494800-d6wvl" (UID: "fe84cd00-e72e-446a-981d-5f7c0e9304af") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.947093 4840 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.947306 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-cabundle podName:1a255a6c-daa4-4b82-b7bb-6d8768feb78c nodeName:}" failed. No retries permitted until 2026-01-29 12:06:59.4472496 +0000 UTC m=+151.110229673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-cabundle") pod "service-ca-9c57cc56f-799dp" (UID: "1a255a6c-daa4-4b82-b7bb-6d8768feb78c") : failed to sync configmap cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.948160 4840 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: E0129 12:06:58.948366 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-key podName:1a255a6c-daa4-4b82-b7bb-6d8768feb78c nodeName:}" failed. No retries permitted until 2026-01-29 12:06:59.448335044 +0000 UTC m=+151.111314937 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-key") pod "service-ca-9c57cc56f-799dp" (UID: "1a255a6c-daa4-4b82-b7bb-6d8768feb78c") : failed to sync secret cache: timed out waiting for the condition
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.951004 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.971896 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 29 12:06:58 crc kubenswrapper[4840]: I0129 12:06:58.992492 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.012254 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.032032 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.052339 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.071683 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.092653 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.112478 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.131725 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.171655 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.194419 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.229828 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd8fh\" (UniqueName: \"kubernetes.io/projected/405b2371-c32c-4697-8c78-35c6a19a8b7a-kube-api-access-fd8fh\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.254335 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fb75\" (UniqueName: \"kubernetes.io/projected/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-kube-api-access-6fb75\") pod \"controller-manager-879f6c89f-bg94b\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.272904 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.274638 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7crbn\" (UniqueName: \"kubernetes.io/projected/c960c9b6-a14e-45f6-9adb-a3944ee2c575-kube-api-access-7crbn\") pod \"authentication-operator-69f744f599-rjjwm\" (UID: \"c960c9b6-a14e-45f6-9adb-a3944ee2c575\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.291994 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.311792 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.332216 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.352736 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.372547 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.390721 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.392907 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.411728 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.439403 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.451469 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.464754 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.465152 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-key\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp"
Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.465429 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1efab996-1c86-4d07-882c-45ee5f49ffe6-proxy-tls\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") "
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.465604 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-cabundle\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.467347 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-cabundle\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.467886 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.469023 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1efab996-1c86-4d07-882c-45ee5f49ffe6-proxy-tls\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.469023 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-signing-key\") pod \"service-ca-9c57cc56f-799dp\" (UID: 
\"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.492031 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.492605 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/405b2371-c32c-4697-8c78-35c6a19a8b7a-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-wc8x4\" (UID: \"405b2371-c32c-4697-8c78-35c6a19a8b7a\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.511313 4840 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.515922 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.532092 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.545963 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.551278 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.572909 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.592127 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.594891 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bg94b"] Jan 29 12:06:59 crc kubenswrapper[4840]: W0129 12:06:59.610102 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc4198ee_d503_4b4d_81bf_f8ce73e9de18.slice/crio-7fb7e1b949ee3b2720dadc6f8a93723fab80a6c9e82381e3352cd0e4bedfdb13 WatchSource:0}: Error finding container 7fb7e1b949ee3b2720dadc6f8a93723fab80a6c9e82381e3352cd0e4bedfdb13: Status 404 returned error can't find the container with id 7fb7e1b949ee3b2720dadc6f8a93723fab80a6c9e82381e3352cd0e4bedfdb13 Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.613770 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.632913 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.650899 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.672053 4840 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.694540 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.711449 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.731386 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.746820 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4"] Jan 29 12:06:59 crc kubenswrapper[4840]: W0129 12:06:59.750373 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod405b2371_c32c_4697_8c78_35c6a19a8b7a.slice/crio-6973be9d417136548191b1ab6427e922710ef81e4ae6324715c1d5807d873d18 WatchSource:0}: Error finding container 6973be9d417136548191b1ab6427e922710ef81e4ae6324715c1d5807d873d18: Status 404 returned error can't find the container with id 6973be9d417136548191b1ab6427e922710ef81e4ae6324715c1d5807d873d18 Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.758366 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rjjwm"] Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.778534 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbf7z\" (UniqueName: \"kubernetes.io/projected/db1a668d-d1f8-476c-846c-63fec292a9db-kube-api-access-qbf7z\") pod \"dns-operator-744455d44c-hxjgh\" (UID: \"db1a668d-d1f8-476c-846c-63fec292a9db\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" Jan 29 12:06:59 crc kubenswrapper[4840]: W0129 12:06:59.785404 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc960c9b6_a14e_45f6_9adb_a3944ee2c575.slice/crio-15e85334bc5054665559839885e11e04747d010ba716bc644cf0d8ada3b2a62d WatchSource:0}: Error finding container 15e85334bc5054665559839885e11e04747d010ba716bc644cf0d8ada3b2a62d: Status 404 returned error can't find the container with id 15e85334bc5054665559839885e11e04747d010ba716bc644cf0d8ada3b2a62d Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.788611 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wpvz\" (UniqueName: \"kubernetes.io/projected/6465231e-0bca-475f-a603-b55c8c37d810-kube-api-access-7wpvz\") pod \"cluster-samples-operator-665b6dd947-r7skm\" (UID: \"6465231e-0bca-475f-a603-b55c8c37d810\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.790109 4840 request.go:700] Waited for 1.84780893s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/serviceaccounts/openshift-config-operator/token Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.808561 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqxht\" (UniqueName: \"kubernetes.io/projected/17039ecb-77ce-4ee6-b6e1-dba1aebc2c53-kube-api-access-lqxht\") pod \"openshift-config-operator-7777fb866f-gl7sc\" (UID: \"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.825350 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ggnqs\" (UniqueName: 
\"kubernetes.io/projected/619b4a63-b1aa-4d9f-a662-043696d19d1e-kube-api-access-ggnqs\") pod \"openshift-apiserver-operator-796bbdcf4f-n9v9c\" (UID: \"619b4a63-b1aa-4d9f-a662-043696d19d1e\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.848276 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wj2pz\" (UniqueName: \"kubernetes.io/projected/3ae5b49d-5291-467c-ae2f-e69d8368dc1f-kube-api-access-wj2pz\") pod \"catalog-operator-68c6474976-r4sgs\" (UID: \"3ae5b49d-5291-467c-ae2f-e69d8368dc1f\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.872908 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9kvc\" (UniqueName: \"kubernetes.io/projected/083790b8-20fc-4566-b7c9-e5eb39e22b8a-kube-api-access-s9kvc\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.887111 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qwth\" (UniqueName: \"kubernetes.io/projected/b0e9c5ce-97d8-4eed-a28c-aba69429bcfd-kube-api-access-6qwth\") pod \"openshift-controller-manager-operator-756b6f6bc6-k8jvz\" (UID: \"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.906807 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbksv\" (UniqueName: \"kubernetes.io/projected/b7da4209-9141-464e-a112-51470c4785d6-kube-api-access-vbksv\") pod \"oauth-openshift-558db77b4-thmqb\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.922533 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.929132 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqdsf\" (UniqueName: \"kubernetes.io/projected/03b827e0-ff80-4021-9b29-72935f8fe30b-kube-api-access-hqdsf\") pod \"route-controller-manager-6576b87f9c-8pndg\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.930235 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.949755 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnm8k\" (UniqueName: \"kubernetes.io/projected/1a255a6c-daa4-4b82-b7bb-6d8768feb78c-kube-api-access-tnm8k\") pod \"service-ca-9c57cc56f-799dp\" (UID: \"1a255a6c-daa4-4b82-b7bb-6d8768feb78c\") " pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.961275 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.967318 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.968852 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/567d236c-6d37-488c-ace0-b8986046e68b-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-2r86v\" (UID: \"567d236c-6d37-488c-ace0-b8986046e68b\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.985403 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:06:59 crc kubenswrapper[4840]: I0129 12:06:59.988738 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsnb8\" (UniqueName: \"kubernetes.io/projected/df2082a0-52e3-4557-bdbf-9f5f654f00b4-kube-api-access-tsnb8\") pod \"downloads-7954f5f757-ksbxx\" (UID: \"df2082a0-52e3-4557-bdbf-9f5f654f00b4\") " pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.001502 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.010228 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz8l2\" (UniqueName: \"kubernetes.io/projected/663d1816-5db6-4cd5-9ed4-1e8353228748-kube-api-access-nz8l2\") pod \"etcd-operator-b45778765-8pbcl\" (UID: \"663d1816-5db6-4cd5-9ed4-1e8353228748\") " pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.012495 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.020459 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.032821 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xklxv\" (UniqueName: \"kubernetes.io/projected/83d6878b-0209-421c-bff8-2e02c478e338-kube-api-access-xklxv\") pod \"migrator-59844c95c7-q4xmw\" (UID: \"83d6878b-0209-421c-bff8-2e02c478e338\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.042832 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.050973 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7707ef9-e15f-44ab-8008-2126af490048-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-xqhk2\" (UID: \"e7707ef9-e15f-44ab-8008-2126af490048\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.056034 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" event={"ID":"405b2371-c32c-4697-8c78-35c6a19a8b7a","Type":"ContainerStarted","Data":"9a53d635f70900b5a710642f079bb2e26436730a1bbad6a81d29976e722229a4"} Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.056110 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" 
event={"ID":"405b2371-c32c-4697-8c78-35c6a19a8b7a","Type":"ContainerStarted","Data":"6973be9d417136548191b1ab6427e922710ef81e4ae6324715c1d5807d873d18"} Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.066201 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" event={"ID":"bc4198ee-d503-4b4d-81bf-f8ce73e9de18","Type":"ContainerStarted","Data":"24d46f7df412f907f5e34ebd46430c8c30373de0baaaf6723cfcf8f1ab6be320"} Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.066243 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" event={"ID":"bc4198ee-d503-4b4d-81bf-f8ce73e9de18","Type":"ContainerStarted","Data":"7fb7e1b949ee3b2720dadc6f8a93723fab80a6c9e82381e3352cd0e4bedfdb13"} Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.066262 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.069407 4840 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bg94b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.069470 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.071127 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" 
event={"ID":"c960c9b6-a14e-45f6-9adb-a3944ee2c575","Type":"ContainerStarted","Data":"255369e95b21b8133605c0da2c69fa174fafbb85feaa9162fa3b19b8994ed766"} Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.071159 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" event={"ID":"c960c9b6-a14e-45f6-9adb-a3944ee2c575","Type":"ContainerStarted","Data":"15e85334bc5054665559839885e11e04747d010ba716bc644cf0d8ada3b2a62d"} Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.072181 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnw4t\" (UniqueName: \"kubernetes.io/projected/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-kube-api-access-cnw4t\") pod \"console-f9d7485db-dltrs\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.096016 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.096209 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxwr2\" (UniqueName: \"kubernetes.io/projected/e11e87ad-33de-4aeb-9742-e70801a6d526-kube-api-access-jxwr2\") pod \"apiserver-76f77b778f-98jrd\" (UID: \"e11e87ad-33de-4aeb-9742-e70801a6d526\") " pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.109701 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs7fk\" (UniqueName: \"kubernetes.io/projected/dcfac6d9-7960-4572-b7bc-ded6694a3733-kube-api-access-xs7fk\") pod \"package-server-manager-789f6589d5-9qw27\" (UID: \"dcfac6d9-7960-4572-b7bc-ded6694a3733\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.129378 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4chl\" (UniqueName: \"kubernetes.io/projected/1efab996-1c86-4d07-882c-45ee5f49ffe6-kube-api-access-w4chl\") pod \"machine-config-controller-84d6567774-pnlbm\" (UID: \"1efab996-1c86-4d07-882c-45ee5f49ffe6\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.129718 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.154260 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk9bg\" (UniqueName: \"kubernetes.io/projected/8f623a10-e56f-42f3-8d77-a1b6f083712a-kube-api-access-fk9bg\") pod \"multus-admission-controller-857f4d67dd-bj49k\" (UID: \"8f623a10-e56f-42f3-8d77-a1b6f083712a\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.164369 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.172755 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq9m9\" (UniqueName: \"kubernetes.io/projected/fe84cd00-e72e-446a-981d-5f7c0e9304af-kube-api-access-zq9m9\") pod \"collect-profiles-29494800-d6wvl\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.176503 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.181258 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.184651 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.191714 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-799dp" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.198962 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.199366 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/083790b8-20fc-4566-b7c9-e5eb39e22b8a-bound-sa-token\") pod \"ingress-operator-5b745b69d9-d6bqw\" (UID: \"083790b8-20fc-4566-b7c9-e5eb39e22b8a\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.208822 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.209987 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h7qb\" (UniqueName: \"kubernetes.io/projected/8d6ee34e-1da3-4bf3-b084-15e2be838d1b-kube-api-access-8h7qb\") pod \"machine-approver-56656f9798-wcprw\" (UID: \"8d6ee34e-1da3-4bf3-b084-15e2be838d1b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.229987 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.231978 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.251707 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jp8jv\" (UniqueName: \"kubernetes.io/projected/6b0b281a-a337-4ff1-b8e5-3760e3af5b20-kube-api-access-jp8jv\") pod \"console-operator-58897d9998-gj6zv\" (UID: \"6b0b281a-a337-4ff1-b8e5-3760e3af5b20\") " pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.270423 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.273771 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w48xk\" (UniqueName: \"kubernetes.io/projected/7d571f03-5043-4b23-bfe9-7d82584ef243-kube-api-access-w48xk\") pod \"apiserver-7bbb656c7d-kc2rh\" (UID: \"7d571f03-5043-4b23-bfe9-7d82584ef243\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.275717 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.276306 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkf7\" (UniqueName: \"kubernetes.io/projected/03d4e817-3cad-4efb-ad16-dafdc1c56c8b-kube-api-access-2dkf7\") pod \"machine-api-operator-5694c8668f-l64hx\" (UID: \"03d4e817-3cad-4efb-ad16-dafdc1c56c8b\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.297814 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.302330 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.316004 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.319768 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntchp\" (UniqueName: \"kubernetes.io/projected/f1a38ee6-33eb-4c68-955a-1253ef95d412-kube-api-access-ntchp\") pod \"router-default-5444994796-dz9sd\" (UID: \"f1a38ee6-33eb-4c68-955a-1253ef95d412\") " pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:07:00 crc kubenswrapper[4840]: W0129 12:07:00.330047 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod619b4a63_b1aa_4d9f_a662_043696d19d1e.slice/crio-310a98fcdc7547046efbaba6b88a9bd29b61fe9e16d02c610f7d411b1b2e1aeb WatchSource:0}: Error finding container 310a98fcdc7547046efbaba6b88a9bd29b61fe9e16d02c610f7d411b1b2e1aeb: Status 404 returned error can't find the 
container with id 310a98fcdc7547046efbaba6b88a9bd29b61fe9e16d02c610f7d411b1b2e1aeb Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.331642 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" Jan 29 12:07:00 crc kubenswrapper[4840]: W0129 12:07:00.380383 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb0e9c5ce_97d8_4eed_a28c_aba69429bcfd.slice/crio-85c7819049bf780ed7f990a42e44ccef0d3dff56f27d38ada768b2cf2768ef5e WatchSource:0}: Error finding container 85c7819049bf780ed7f990a42e44ccef0d3dff56f27d38ada768b2cf2768ef5e: Status 404 returned error can't find the container with id 85c7819049bf780ed7f990a42e44ccef0d3dff56f27d38ada768b2cf2768ef5e Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381253 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-srv-cert\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381716 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd7fb866-2377-4aef-89e8-9fc306b80acd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381759 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dx2\" (UniqueName: 
\"kubernetes.io/projected/57eab084-b8aa-4678-b6fc-30f97fe7b52b-kube-api-access-x6dx2\") pod \"control-plane-machine-set-operator-78cbb6b69f-94shd\" (UID: \"57eab084-b8aa-4678-b6fc-30f97fe7b52b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381788 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fqb\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-kube-api-access-b5fqb\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381871 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-bound-sa-token\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381918 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/424513c6-89ae-4755-b7e7-8bf503a91cb2-apiservice-cert\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381963 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksbpn\" (UniqueName: \"kubernetes.io/projected/3e3abb29-6a37-42c3-b49c-81f916f621e9-kube-api-access-ksbpn\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.381988 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382046 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22cq\" (UniqueName: \"kubernetes.io/projected/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-kube-api-access-z22cq\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382086 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e3abb29-6a37-42c3-b49c-81f916f621e9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382111 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 
12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382130 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xw9kk\" (UniqueName: \"kubernetes.io/projected/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-kube-api-access-xw9kk\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382152 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7fb866-2377-4aef-89e8-9fc306b80acd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382169 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/424513c6-89ae-4755-b7e7-8bf503a91cb2-webhook-cert\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382192 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-registry-certificates\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382239 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/57eab084-b8aa-4678-b6fc-30f97fe7b52b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-94shd\" (UID: \"57eab084-b8aa-4678-b6fc-30f97fe7b52b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382306 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/424513c6-89ae-4755-b7e7-8bf503a91cb2-tmpfs\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382333 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01215522-37e6-4461-91d1-f695896d6ede-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382362 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-registry-tls\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382412 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3abb29-6a37-42c3-b49c-81f916f621e9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382452 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7fb866-2377-4aef-89e8-9fc306b80acd-config\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382472 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-proxy-tls\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382514 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-images\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382539 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-trusted-ca\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382566 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01215522-37e6-4461-91d1-f695896d6ede-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382592 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctzvc\" (UniqueName: \"kubernetes.io/projected/424513c6-89ae-4755-b7e7-8bf503a91cb2-kube-api-access-ctzvc\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.382614 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: E0129 12:07:00.382715 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:00.882694089 +0000 UTC m=+152.545674162 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.420861 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.444426 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.458064 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.483918 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:00 crc kubenswrapper[4840]: E0129 12:07:00.487808 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:00.987767658 +0000 UTC m=+152.650747551 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488198 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7fb866-2377-4aef-89e8-9fc306b80acd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488301 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/424513c6-89ae-4755-b7e7-8bf503a91cb2-webhook-cert\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488440 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-registry-certificates\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488531 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/57eab084-b8aa-4678-b6fc-30f97fe7b52b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-94shd\" (UID: \"57eab084-b8aa-4678-b6fc-30f97fe7b52b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488572 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmtz8\" (UniqueName: \"kubernetes.io/projected/660d278f-da23-4033-82de-88c42ef375ed-kube-api-access-cmtz8\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488682 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a0a68ea-c1c8-4574-bdb7-7661ca9554a7-cert\") pod \"ingress-canary-ndtmk\" (UID: \"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7\") " pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488762 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/424513c6-89ae-4755-b7e7-8bf503a91cb2-tmpfs\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488871 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-registry-tls\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.488905 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01215522-37e6-4461-91d1-f695896d6ede-ca-trust-extracted\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489035 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10ed6060-8201-4bfb-b91e-8fe75ec3c408-config-volume\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489097 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-mountpoint-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489125 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-registration-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489181 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3abb29-6a37-42c3-b49c-81f916f621e9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489236 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-proxy-tls\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489329 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd7fb866-2377-4aef-89e8-9fc306b80acd-config\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489362 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-images\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489394 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-trusted-ca\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489430 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/01215522-37e6-4461-91d1-f695896d6ede-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489463 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctzvc\" (UniqueName: \"kubernetes.io/projected/424513c6-89ae-4755-b7e7-8bf503a91cb2-kube-api-access-ctzvc\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489516 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489547 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk2cf\" (UniqueName: \"kubernetes.io/projected/ae5da369-d1bd-4e91-857f-1220180f575f-kube-api-access-fk2cf\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489704 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxrdn\" (UniqueName: \"kubernetes.io/projected/a9b060e9-18c0-425e-aa43-ad3c281de751-kube-api-access-nxrdn\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 
12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489788 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-socket-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489824 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660d278f-da23-4033-82de-88c42ef375ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.489972 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-srv-cert\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490155 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd7fb866-2377-4aef-89e8-9fc306b80acd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490195 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dx2\" (UniqueName: \"kubernetes.io/projected/57eab084-b8aa-4678-b6fc-30f97fe7b52b-kube-api-access-x6dx2\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-94shd\" (UID: \"57eab084-b8aa-4678-b6fc-30f97fe7b52b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490289 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5fqb\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-kube-api-access-b5fqb\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490501 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95r5s\" (UniqueName: \"kubernetes.io/projected/0a0a68ea-c1c8-4574-bdb7-7661ca9554a7-kube-api-access-95r5s\") pod \"ingress-canary-ndtmk\" (UID: \"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7\") " pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490578 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f15546b9-34c6-47f8-b985-a30fbccd7b7c-node-bootstrap-token\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490604 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/660d278f-da23-4033-82de-88c42ef375ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490726 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-bound-sa-token\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490803 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/424513c6-89ae-4755-b7e7-8bf503a91cb2-apiservice-cert\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490878 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksbpn\" (UniqueName: \"kubernetes.io/projected/3e3abb29-6a37-42c3-b49c-81f916f621e9-kube-api-access-ksbpn\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.490934 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.493377 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01215522-37e6-4461-91d1-f695896d6ede-ca-trust-extracted\") pod 
\"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495033 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2mf5\" (UniqueName: \"kubernetes.io/projected/f15546b9-34c6-47f8-b985-a30fbccd7b7c-kube-api-access-n2mf5\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495088 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10ed6060-8201-4bfb-b91e-8fe75ec3c408-metrics-tls\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495158 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-csi-data-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495202 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msp4f\" (UniqueName: \"kubernetes.io/projected/10ed6060-8201-4bfb-b91e-8fe75ec3c408-kube-api-access-msp4f\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495326 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22cq\" (UniqueName: 
\"kubernetes.io/projected/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-kube-api-access-z22cq\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495349 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5da369-d1bd-4e91-857f-1220180f575f-serving-cert\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495375 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5da369-d1bd-4e91-857f-1220180f575f-config\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495448 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e3abb29-6a37-42c3-b49c-81f916f621e9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495490 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495517 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f15546b9-34c6-47f8-b985-a30fbccd7b7c-certs\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495538 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-plugins-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.495627 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw9kk\" (UniqueName: \"kubernetes.io/projected/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-kube-api-access-xw9kk\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.500292 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e3abb29-6a37-42c3-b49c-81f916f621e9-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.502759 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fd7fb866-2377-4aef-89e8-9fc306b80acd-config\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.505512 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-images\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: E0129 12:07:00.507112 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.007085337 +0000 UTC m=+152.670065230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.512666 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-registry-certificates\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.513028 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/424513c6-89ae-4755-b7e7-8bf503a91cb2-tmpfs\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.514228 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.514662 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-trusted-ca\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: 
\"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.529255 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-proxy-tls\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.534439 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-srv-cert\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.534829 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd7fb866-2377-4aef-89e8-9fc306b80acd-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.540585 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.546109 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/424513c6-89ae-4755-b7e7-8bf503a91cb2-webhook-cert\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.546565 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/424513c6-89ae-4755-b7e7-8bf503a91cb2-apiservice-cert\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.546615 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e3abb29-6a37-42c3-b49c-81f916f621e9-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.546999 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/57eab084-b8aa-4678-b6fc-30f97fe7b52b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-94shd\" (UID: \"57eab084-b8aa-4678-b6fc-30f97fe7b52b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.547545 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-profile-collector-cert\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.549167 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01215522-37e6-4461-91d1-f695896d6ede-installation-pull-secrets\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.565674 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-registry-tls\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.573468 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw9kk\" (UniqueName: \"kubernetes.io/projected/290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e-kube-api-access-xw9kk\") pod \"olm-operator-6b444d44fb-x4chr\" (UID: \"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.578389 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5fqb\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-kube-api-access-b5fqb\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 
12:07:00.579813 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dx2\" (UniqueName: \"kubernetes.io/projected/57eab084-b8aa-4678-b6fc-30f97fe7b52b-kube-api-access-x6dx2\") pod \"control-plane-machine-set-operator-78cbb6b69f-94shd\" (UID: \"57eab084-b8aa-4678-b6fc-30f97fe7b52b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.580891 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.582971 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-thmqb"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.596707 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597156 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/660d278f-da23-4033-82de-88c42ef375ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597222 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2mf5\" (UniqueName: \"kubernetes.io/projected/f15546b9-34c6-47f8-b985-a30fbccd7b7c-kube-api-access-n2mf5\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " 
pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597240 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10ed6060-8201-4bfb-b91e-8fe75ec3c408-metrics-tls\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597261 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-csi-data-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597276 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msp4f\" (UniqueName: \"kubernetes.io/projected/10ed6060-8201-4bfb-b91e-8fe75ec3c408-kube-api-access-msp4f\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597305 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5da369-d1bd-4e91-857f-1220180f575f-serving-cert\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597321 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5da369-d1bd-4e91-857f-1220180f575f-config\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597341 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f15546b9-34c6-47f8-b985-a30fbccd7b7c-certs\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597361 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-plugins-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597391 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmtz8\" (UniqueName: \"kubernetes.io/projected/660d278f-da23-4033-82de-88c42ef375ed-kube-api-access-cmtz8\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597410 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a0a68ea-c1c8-4574-bdb7-7661ca9554a7-cert\") pod \"ingress-canary-ndtmk\" (UID: \"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7\") " pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597429 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10ed6060-8201-4bfb-b91e-8fe75ec3c408-config-volume\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " 
pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597444 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-mountpoint-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597461 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-registration-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597490 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk2cf\" (UniqueName: \"kubernetes.io/projected/ae5da369-d1bd-4e91-857f-1220180f575f-kube-api-access-fk2cf\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597509 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxrdn\" (UniqueName: \"kubernetes.io/projected/a9b060e9-18c0-425e-aa43-ad3c281de751-kube-api-access-nxrdn\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597523 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660d278f-da23-4033-82de-88c42ef375ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: 
\"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597563 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-socket-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597603 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f15546b9-34c6-47f8-b985-a30fbccd7b7c-node-bootstrap-token\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.597624 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95r5s\" (UniqueName: \"kubernetes.io/projected/0a0a68ea-c1c8-4574-bdb7-7661ca9554a7-kube-api-access-95r5s\") pod \"ingress-canary-ndtmk\" (UID: \"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7\") " pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:07:00 crc kubenswrapper[4840]: E0129 12:07:00.597972 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.09795003 +0000 UTC m=+152.760929923 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.606102 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-socket-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.606737 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-csi-data-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.607578 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-mountpoint-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.607798 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ae5da369-d1bd-4e91-857f-1220180f575f-config\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 
12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.608097 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/10ed6060-8201-4bfb-b91e-8fe75ec3c408-config-volume\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.608125 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-plugins-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.609596 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a9b060e9-18c0-425e-aa43-ad3c281de751-registration-dir\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.610337 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-bound-sa-token\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.625758 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0a0a68ea-c1c8-4574-bdb7-7661ca9554a7-cert\") pod \"ingress-canary-ndtmk\" (UID: \"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7\") " pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.627274 4840 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-z22cq\" (UniqueName: \"kubernetes.io/projected/8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc-kube-api-access-z22cq\") pod \"machine-config-operator-74547568cd-bknzs\" (UID: \"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.628418 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660d278f-da23-4033-82de-88c42ef375ed-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.629143 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/660d278f-da23-4033-82de-88c42ef375ed-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.629582 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae5da369-d1bd-4e91-857f-1220180f575f-serving-cert\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.629927 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/10ed6060-8201-4bfb-b91e-8fe75ec3c408-metrics-tls\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 
12:07:00.630015 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f15546b9-34c6-47f8-b985-a30fbccd7b7c-node-bootstrap-token\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.645734 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksbpn\" (UniqueName: \"kubernetes.io/projected/3e3abb29-6a37-42c3-b49c-81f916f621e9-kube-api-access-ksbpn\") pod \"kube-storage-version-migrator-operator-b67b599dd-nl4fg\" (UID: \"3e3abb29-6a37-42c3-b49c-81f916f621e9\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.648166 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f15546b9-34c6-47f8-b985-a30fbccd7b7c-certs\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.655323 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fd7fb866-2377-4aef-89e8-9fc306b80acd-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-kx5s9\" (UID: \"fd7fb866-2377-4aef-89e8-9fc306b80acd\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.702327 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: E0129 12:07:00.702903 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.202885945 +0000 UTC m=+152.865865838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.706669 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctzvc\" (UniqueName: \"kubernetes.io/projected/424513c6-89ae-4755-b7e7-8bf503a91cb2-kube-api-access-ctzvc\") pod \"packageserver-d55dfcdfc-lcmfc\" (UID: \"424513c6-89ae-4755-b7e7-8bf503a91cb2\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.709831 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.711708 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.738536 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95r5s\" (UniqueName: \"kubernetes.io/projected/0a0a68ea-c1c8-4574-bdb7-7661ca9554a7-kube-api-access-95r5s\") pod \"ingress-canary-ndtmk\" (UID: \"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7\") " pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.747882 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.751681 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2mf5\" (UniqueName: \"kubernetes.io/projected/f15546b9-34c6-47f8-b985-a30fbccd7b7c-kube-api-access-n2mf5\") pod \"machine-config-server-m87r7\" (UID: \"f15546b9-34c6-47f8-b985-a30fbccd7b7c\") " pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.758436 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.766649 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk2cf\" (UniqueName: \"kubernetes.io/projected/ae5da369-d1bd-4e91-857f-1220180f575f-kube-api-access-fk2cf\") pod \"service-ca-operator-777779d784-qdhws\" (UID: \"ae5da369-d1bd-4e91-857f-1220180f575f\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.772228 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmtz8\" (UniqueName: \"kubernetes.io/projected/660d278f-da23-4033-82de-88c42ef375ed-kube-api-access-cmtz8\") pod \"marketplace-operator-79b997595-cwqw5\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.795331 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.799082 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msp4f\" (UniqueName: \"kubernetes.io/projected/10ed6060-8201-4bfb-b91e-8fe75ec3c408-kube-api-access-msp4f\") pod \"dns-default-x9hv2\" (UID: \"10ed6060-8201-4bfb-b91e-8fe75ec3c408\") " pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: W0129 12:07:00.800423 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17039ecb_77ce_4ee6_b6e1_dba1aebc2c53.slice/crio-124ae2525e62bee3cf3376d98aba43c4a041b038c60d83a2f12aae17945aa19b WatchSource:0}: Error finding container 124ae2525e62bee3cf3376d98aba43c4a041b038c60d83a2f12aae17945aa19b: Status 404 returned error can't find the 
container with id 124ae2525e62bee3cf3376d98aba43c4a041b038c60d83a2f12aae17945aa19b Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.802973 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:00 crc kubenswrapper[4840]: E0129 12:07:00.803414 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.303395272 +0000 UTC m=+152.966375165 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.832761 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.840352 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.847627 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.854543 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxrdn\" (UniqueName: \"kubernetes.io/projected/a9b060e9-18c0-425e-aa43-ad3c281de751-kube-api-access-nxrdn\") pod \"csi-hostpathplugin-78t8x\" (UID: \"a9b060e9-18c0-425e-aa43-ad3c281de751\") " pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.856032 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:00 crc kubenswrapper[4840]: W0129 12:07:00.863144 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7da4209_9141_464e_a112_51470c4785d6.slice/crio-10d88972322b879da31e64de14d6e898d1936a52d205d57fb634d5d512956113 WatchSource:0}: Error finding container 10d88972322b879da31e64de14d6e898d1936a52d205d57fb634d5d512956113: Status 404 returned error can't find the container with id 10d88972322b879da31e64de14d6e898d1936a52d205d57fb634d5d512956113 Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.871526 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.886702 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.892863 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ndtmk" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.903984 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-m87r7" Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.905836 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:00 crc kubenswrapper[4840]: E0129 12:07:00.906294 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.406267612 +0000 UTC m=+153.069247685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.951487 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2"] Jan 29 12:07:00 crc kubenswrapper[4840]: I0129 12:07:00.971603 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-hxjgh"] Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.007846 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.008092 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.508046139 +0000 UTC m=+153.171026032 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.008908 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.009380 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.509369351 +0000 UTC m=+153.172349244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.101377 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" event={"ID":"b7da4209-9141-464e-a112-51470c4785d6","Type":"ContainerStarted","Data":"10d88972322b879da31e64de14d6e898d1936a52d205d57fb634d5d512956113"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.112427 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.112694 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.612566982 +0000 UTC m=+153.275546875 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.114027 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.114376 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.614368558 +0000 UTC m=+153.277348451 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.119951 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" event={"ID":"567d236c-6d37-488c-ace0-b8986046e68b","Type":"ContainerStarted","Data":"a10d755f4706074ff440c3b5850614eb73bf5e2a3542eb342200aa90aed8e8e7"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.123442 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m87r7" event={"ID":"f15546b9-34c6-47f8-b985-a30fbccd7b7c","Type":"ContainerStarted","Data":"b7ab8002879fa4dad668efef14e9b3e22e56128302b34f8043e06b9c532fe631"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.126447 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" event={"ID":"619b4a63-b1aa-4d9f-a662-043696d19d1e","Type":"ContainerStarted","Data":"310a98fcdc7547046efbaba6b88a9bd29b61fe9e16d02c610f7d411b1b2e1aeb"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.144478 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" event={"ID":"03b827e0-ff80-4021-9b29-72935f8fe30b","Type":"ContainerStarted","Data":"d3d61c2960c16fd52a16e18f9f0cad71db5161440ec99088d64c96feaf059fd0"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.144551 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" event={"ID":"03b827e0-ff80-4021-9b29-72935f8fe30b","Type":"ContainerStarted","Data":"1896ca7532b53da1d380e91ccab8a032e9d949aca6fc91f5b9d14e266aaeb665"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.145202 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.150049 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dz9sd" event={"ID":"f1a38ee6-33eb-4c68-955a-1253ef95d412","Type":"ContainerStarted","Data":"242c8b26e90b27a710f6d67695951dfdc54f337216bc9595b54aeb7cfbd6833e"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.153661 4840 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8pndg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.153757 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.159174 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" event={"ID":"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53","Type":"ContainerStarted","Data":"124ae2525e62bee3cf3376d98aba43c4a041b038c60d83a2f12aae17945aa19b"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.170955 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" event={"ID":"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd","Type":"ContainerStarted","Data":"85c7819049bf780ed7f990a42e44ccef0d3dff56f27d38ada768b2cf2768ef5e"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.176353 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" event={"ID":"6465231e-0bca-475f-a603-b55c8c37d810","Type":"ContainerStarted","Data":"abeb6fd6774488e0e2f2ec9ea4fd9f2806ff37eeeb8aa9902d0fee42b1d20bdc"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.181707 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" event={"ID":"8d6ee34e-1da3-4bf3-b084-15e2be838d1b","Type":"ContainerStarted","Data":"7d5e2fd6c689c4ba7673741d0d3f02d1afaa02b4e9a49e30c10d1ae550083975"} Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.183635 4840 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bg94b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.183699 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.231117 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.233396 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.733358257 +0000 UTC m=+153.396338150 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.334092 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.338094 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.838072426 +0000 UTC m=+153.501052319 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.370149 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rjjwm" podStartSLOduration=131.370119525 podStartE2EDuration="2m11.370119525s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:01.33345942 +0000 UTC m=+152.996439323" watchObservedRunningTime="2026-01-29 12:07:01.370119525 +0000 UTC m=+153.033099418" Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.410898 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-wc8x4" podStartSLOduration=130.410873349 podStartE2EDuration="2m10.410873349s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:01.391490018 +0000 UTC m=+153.054469921" watchObservedRunningTime="2026-01-29 12:07:01.410873349 +0000 UTC m=+153.073853242" Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.458914 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.459277 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:01.959260723 +0000 UTC m=+153.622240616 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.562468 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.563013 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.0629741 +0000 UTC m=+153.725953993 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.663315 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.663535 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.163498727 +0000 UTC m=+153.826478620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.663625 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.664212 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.164188739 +0000 UTC m=+153.827168632 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.764441 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.764739 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.264722825 +0000 UTC m=+153.927702718 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.764813 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.765172 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.26516442 +0000 UTC m=+153.928144313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.865818 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.867561 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.367537715 +0000 UTC m=+154.030517608 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:01 crc kubenswrapper[4840]: I0129 12:07:01.969160 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:01 crc kubenswrapper[4840]: E0129 12:07:01.969695 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.469670122 +0000 UTC m=+154.132650015 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.071318 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.071755 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.571732477 +0000 UTC m=+154.234712370 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.172911 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.173360 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.673344618 +0000 UTC m=+154.336324511 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.187425 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" event={"ID":"8d6ee34e-1da3-4bf3-b084-15e2be838d1b","Type":"ContainerStarted","Data":"b4556a981bfd78c4eeb7bc039cdb40c8bc4b1fc13d5348e112f3cc11dd0dab47"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.195180 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" podStartSLOduration=132.195159766 podStartE2EDuration="2m12.195159766s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:02.166848874 +0000 UTC m=+153.829828767" watchObservedRunningTime="2026-01-29 12:07:02.195159766 +0000 UTC m=+153.858139659" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.200450 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" event={"ID":"b0e9c5ce-97d8-4eed-a28c-aba69429bcfd","Type":"ContainerStarted","Data":"95b15c686eb99ca5869f902dae5dfd5d64566082dd7d52dd266a21608e141c5d"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.204127 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-m87r7" 
event={"ID":"f15546b9-34c6-47f8-b985-a30fbccd7b7c","Type":"ContainerStarted","Data":"d6ee67b3a34fe23bc98d2315c4b9ce723a4d28b5a3a194f49cf7311a99b316a0"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.209693 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" event={"ID":"6465231e-0bca-475f-a603-b55c8c37d810","Type":"ContainerStarted","Data":"9576c13f5882bdfa86711238949b5f2f6cb52cdb1dc03b4d624f621fa645964b"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.211361 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-dz9sd" event={"ID":"f1a38ee6-33eb-4c68-955a-1253ef95d412","Type":"ContainerStarted","Data":"cc4c01ac0419d9c7aff5204e1cd8c69d22d103aa7cd89e55a67c09a66515c081"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.224446 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" event={"ID":"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53","Type":"ContainerStarted","Data":"853bae2e1110540e7a77c5624256536a324094195307faa869c425676f547823"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.229640 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" event={"ID":"e7707ef9-e15f-44ab-8008-2126af490048","Type":"ContainerStarted","Data":"19b0f46e6af34108003f9901e11eb130a69575c6590519c4b842d130deffc1f4"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.232946 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" event={"ID":"db1a668d-d1f8-476c-846c-63fec292a9db","Type":"ContainerStarted","Data":"17086a2e9d0c6779ebf6f71ccc31018fb505b0a091020c8c57020fe5845a006d"} Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.241730 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" event={"ID":"619b4a63-b1aa-4d9f-a662-043696d19d1e","Type":"ContainerStarted","Data":"68cbb6a876b7bc405c9fbb8c3d087b351f0fbec3df37f7e0af1d59c0bfde08bc"} Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.279792 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.779776561 +0000 UTC m=+154.442756454 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.279818 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.280107 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.281978 4840 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.78195116 +0000 UTC m=+154.444931263 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.333230 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.381420 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.383270 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.88324019 +0000 UTC m=+154.546220083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.428465 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.453768 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.453939 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.489861 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.491242 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:02.991223442 +0000 UTC m=+154.654203335 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.530473 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.539146 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-dz9sd" podStartSLOduration=131.53911754 podStartE2EDuration="2m11.53911754s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:02.537857932 +0000 UTC m=+154.200837835" watchObservedRunningTime="2026-01-29 12:07:02.53911754 +0000 UTC m=+154.202097433" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.592171 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.592766 4840 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.09274283 +0000 UTC m=+154.755722723 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.605770 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-k8jvz" podStartSLOduration=132.605742079 podStartE2EDuration="2m12.605742079s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:02.571712608 +0000 UTC m=+154.234692501" watchObservedRunningTime="2026-01-29 12:07:02.605742079 +0000 UTC m=+154.268721972" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.672875 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" podStartSLOduration=131.672849854 podStartE2EDuration="2m11.672849854s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:02.661194207 +0000 UTC m=+154.324174140" watchObservedRunningTime="2026-01-29 12:07:02.672849854 +0000 UTC m=+154.335829747" Jan 29 12:07:02 
crc kubenswrapper[4840]: I0129 12:07:02.673171 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-m87r7" podStartSLOduration=5.6731641029999995 podStartE2EDuration="5.673164103s" podCreationTimestamp="2026-01-29 12:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:02.626509624 +0000 UTC m=+154.289489527" watchObservedRunningTime="2026-01-29 12:07:02.673164103 +0000 UTC m=+154.336143996" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.697300 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.698157 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.19813455 +0000 UTC m=+154.861114453 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.720637 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-ksbxx"] Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.780393 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl"] Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.783631 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-n9v9c" podStartSLOduration=132.783600572 podStartE2EDuration="2m12.783600572s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:02.751640545 +0000 UTC m=+154.414620448" watchObservedRunningTime="2026-01-29 12:07:02.783600572 +0000 UTC m=+154.446580465" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.801935 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.802268 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.30224827 +0000 UTC m=+154.965228173 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.838555 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-98jrd"] Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.882367 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27"] Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.895180 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-dltrs"] Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.903990 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.904346 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-gj6zv"] Jan 29 12:07:02 crc kubenswrapper[4840]: E0129 12:07:02.904512 4840 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.40449483 +0000 UTC m=+155.067474714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.943918 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm"] Jan 29 12:07:02 crc kubenswrapper[4840]: I0129 12:07:02.962086 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-8pbcl"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:02.999083 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.014284 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.014815 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-01-29 12:07:03.514783495 +0000 UTC m=+155.177763388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: W0129 12:07:03.051587 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4832ef07_2202_4b5b_9ed5_70bc621ea8dd.slice/crio-c1a7d91731d1ee8018dc7b6949e1b2ee6046161730526158a836cb4f3217be49 WatchSource:0}: Error finding container c1a7d91731d1ee8018dc7b6949e1b2ee6046161730526158a836cb4f3217be49: Status 404 returned error can't find the container with id c1a7d91731d1ee8018dc7b6949e1b2ee6046161730526158a836cb4f3217be49 Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.115973 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.117196 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.61717432 +0000 UTC m=+155.280154213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.140834 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-l64hx"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.144381 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qdhws"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.180825 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr"] Jan 29 12:07:03 crc kubenswrapper[4840]: W0129 12:07:03.226385 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae5da369_d1bd_4e91_857f_1220180f575f.slice/crio-1d01e83ff520ae1e023f75e80abcfc93ac928809c17f7f8c13e495302b8edfa2 WatchSource:0}: Error finding container 1d01e83ff520ae1e023f75e80abcfc93ac928809c17f7f8c13e495302b8edfa2: Status 404 returned error can't find the container with id 1d01e83ff520ae1e023f75e80abcfc93ac928809c17f7f8c13e495302b8edfa2 Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.227754 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:03 crc 
kubenswrapper[4840]: E0129 12:07:03.228369 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.728305112 +0000 UTC m=+155.391285015 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.228635 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.229351 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.729341054 +0000 UTC m=+155.392320947 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.231584 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-bj49k"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.235760 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x9hv2"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.258567 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.260546 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" event={"ID":"ae5da369-d1bd-4e91-857f-1220180f575f","Type":"ContainerStarted","Data":"1d01e83ff520ae1e023f75e80abcfc93ac928809c17f7f8c13e495302b8edfa2"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.287340 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" event={"ID":"8d6ee34e-1da3-4bf3-b084-15e2be838d1b","Type":"ContainerStarted","Data":"0123dd11bb267351899b6d15ef99a67c4dda30f8e55d9cab56dd13c14e192c88"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.309081 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.311876 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-ingress-canary/ingress-canary-ndtmk"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.321810 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9"] Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.333453 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.833398802 +0000 UTC m=+155.496378715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.349294 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.333205 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.354802 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.355488 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.855466817 +0000 UTC m=+155.518446710 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.369360 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" event={"ID":"dcfac6d9-7960-4572-b7bc-ded6694a3733","Type":"ContainerStarted","Data":"6d16cf71d4a63b09466c61e2a7e5dba482a17487b0953b9e28ce612e83089c32"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.371497 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wcprw" podStartSLOduration=133.371464681 podStartE2EDuration="2m13.371464681s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:03.332655069 +0000 UTC m=+154.995634972" watchObservedRunningTime="2026-01-29 12:07:03.371464681 +0000 UTC m=+155.034444574" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.380578 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" event={"ID":"3ae5b49d-5291-467c-ae2f-e69d8368dc1f","Type":"ContainerStarted","Data":"da954cf92cf8fa94d45b4c38289f7ee798483e1ad33bd8d6dbcfb73ba21342c5"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.386462 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" event={"ID":"e7707ef9-e15f-44ab-8008-2126af490048","Type":"ContainerStarted","Data":"dbef4dde60eb02bf1cec44085b6dc1b8ac0550c0fb0c50182a602cc691ec33e4"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.396940 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-799dp"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.409147 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" event={"ID":"db1a668d-d1f8-476c-846c-63fec292a9db","Type":"ContainerStarted","Data":"7c9cc9c86d3d6b9c39e6aae36ce4fe680110bfe4eaf40133fe4dbba919a80eca"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.409210 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" event={"ID":"db1a668d-d1f8-476c-846c-63fec292a9db","Type":"ContainerStarted","Data":"8a579a49bd148cdf7f3eee466e1a050709ab36c862b78b3db13c0cc2eb8c5e0e"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.423466 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" event={"ID":"03d4e817-3cad-4efb-ad16-dafdc1c56c8b","Type":"ContainerStarted","Data":"65076d8b07b6d149e84937ae5ef4ed74e6d37929668ab52515ff7254f6ddf7ab"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.427882 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" event={"ID":"6465231e-0bca-475f-a603-b55c8c37d810","Type":"ContainerStarted","Data":"3df03dd718e0f1bc59cb892cd4fa07d7d2541453f4ab10da93053aee322838bf"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.430298 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:03 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:03 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:03 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.430391 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.442160 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.444449 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" event={"ID":"fe84cd00-e72e-446a-981d-5f7c0e9304af","Type":"ContainerStarted","Data":"6432fbbedcd59d39e979d194574472c75276b4593cbf3de68cb30aa0221b48d2"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.444468 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-xqhk2" podStartSLOduration=132.44444649 podStartE2EDuration="2m12.44444649s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:03.437565984 +0000 UTC m=+155.100545887" watchObservedRunningTime="2026-01-29 12:07:03.44444649 +0000 UTC m=+155.107426383" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.454464 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" event={"ID":"663d1816-5db6-4cd5-9ed4-1e8353228748","Type":"ContainerStarted","Data":"f4d31c2b074ed94dae5b70dcbc67e09d5f8ec60816374ae0ec79b37b53142a0a"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.456729 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.457086 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.458406 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:03.95838706 +0000 UTC m=+155.621366953 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.463594 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" event={"ID":"6b0b281a-a337-4ff1-b8e5-3760e3af5b20","Type":"ContainerStarted","Data":"c31540eaa1e455cf602aebdd46d2bb9eb351cf42e4026a9073e99b1a04f0ebd4"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.468169 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwqw5"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.484621 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-r7skm" podStartSLOduration=133.484601356 podStartE2EDuration="2m13.484601356s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:03.476228562 +0000 UTC m=+155.139208465" watchObservedRunningTime="2026-01-29 12:07:03.484601356 +0000 UTC m=+155.147581249" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.494630 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-78t8x"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.499493 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" 
event={"ID":"1efab996-1c86-4d07-882c-45ee5f49ffe6","Type":"ContainerStarted","Data":"8bc7e079fc42f093558aa6cf42110469f7c332aa5f8cde8df98732abea8062f1"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.508010 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-hxjgh" podStartSLOduration=133.507981932 podStartE2EDuration="2m13.507981932s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:03.507538128 +0000 UTC m=+155.170518021" watchObservedRunningTime="2026-01-29 12:07:03.507981932 +0000 UTC m=+155.170961845" Jan 29 12:07:03 crc kubenswrapper[4840]: W0129 12:07:03.510543 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod083790b8_20fc_4566_b7c9_e5eb39e22b8a.slice/crio-a188b9ccefe18f8444eaa3646cc8f2f368aad424e35e338a45613e35bb05e3b2 WatchSource:0}: Error finding container a188b9ccefe18f8444eaa3646cc8f2f368aad424e35e338a45613e35bb05e3b2: Status 404 returned error can't find the container with id a188b9ccefe18f8444eaa3646cc8f2f368aad424e35e338a45613e35bb05e3b2 Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.510752 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" event={"ID":"b7da4209-9141-464e-a112-51470c4785d6","Type":"ContainerStarted","Data":"4cad9999d0d5c88d0454dfb30ed55d43e9560adafdb59e2df034472e5f36935d"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.511356 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.523182 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" event={"ID":"567d236c-6d37-488c-ace0-b8986046e68b","Type":"ContainerStarted","Data":"61ea0ebcd99b6564dd1ce14dc5369086a082b7c88b0e6e91ffe9ecb6be21ccc2"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.537124 4840 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-thmqb container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" start-of-body= Jan 29 12:07:03 crc kubenswrapper[4840]: W0129 12:07:03.537597 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd7fb866_2377_4aef_89e8_9fc306b80acd.slice/crio-87ddfc4e2c54940a9cd0c762ddb0f8986a5f108e88924fbd5a877d447c084e89 WatchSource:0}: Error finding container 87ddfc4e2c54940a9cd0c762ddb0f8986a5f108e88924fbd5a877d447c084e89: Status 404 returned error can't find the container with id 87ddfc4e2c54940a9cd0c762ddb0f8986a5f108e88924fbd5a877d447c084e89 Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.537615 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" podUID="b7da4209-9141-464e-a112-51470c4785d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.15:6443/healthz\": dial tcp 10.217.0.15:6443: connect: connection refused" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.545395 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.550644 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg"] Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 
12:07:03.558500 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.559958 4840 generic.go:334] "Generic (PLEG): container finished" podID="17039ecb-77ce-4ee6-b6e1-dba1aebc2c53" containerID="853bae2e1110540e7a77c5624256536a324094195307faa869c425676f547823" exitCode=0 Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.560128 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" event={"ID":"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53","Type":"ContainerDied","Data":"853bae2e1110540e7a77c5624256536a324094195307faa869c425676f547823"} Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.560536 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.060512517 +0000 UTC m=+155.723492410 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.561616 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" podStartSLOduration=133.561584811 podStartE2EDuration="2m13.561584811s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:03.560437724 +0000 UTC m=+155.223417637" watchObservedRunningTime="2026-01-29 12:07:03.561584811 +0000 UTC m=+155.224564704" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.615791 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksbxx" event={"ID":"df2082a0-52e3-4557-bdbf-9f5f654f00b4","Type":"ContainerStarted","Data":"b9cce28e689d629bf05ca0787b25e153a220faddf28a98d66fcfe0ab2f40787b"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.618251 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.625485 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" event={"ID":"e11e87ad-33de-4aeb-9742-e70801a6d526","Type":"ContainerStarted","Data":"062b1e3824ac99974a7c3d98b1c272a5be8c0324370befa3a5f60e05ae41f00b"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.628493 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-f9d7485db-dltrs" event={"ID":"4832ef07-2202-4b5b-9ed5-70bc621ea8dd","Type":"ContainerStarted","Data":"c1a7d91731d1ee8018dc7b6949e1b2ee6046161730526158a836cb4f3217be49"} Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.635245 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-2r86v" podStartSLOduration=132.63522698 podStartE2EDuration="2m12.63522698s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:03.59268566 +0000 UTC m=+155.255665553" watchObservedRunningTime="2026-01-29 12:07:03.63522698 +0000 UTC m=+155.298206873" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.659474 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.659664 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.15963819 +0000 UTC m=+155.822618083 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.660416 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.660551 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.669536 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.692203 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.192166114 +0000 UTC m=+155.855146017 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.770597 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.771992 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.271960128 +0000 UTC m=+155.934940021 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.873117 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.873700 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.373677343 +0000 UTC m=+156.036657236 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.974135 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.974329 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.474299251 +0000 UTC m=+156.137279144 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:03 crc kubenswrapper[4840]: I0129 12:07:03.974506 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:03 crc kubenswrapper[4840]: E0129 12:07:03.974872 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.474854179 +0000 UTC m=+156.137834072 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.078771 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.079098 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.579063512 +0000 UTC m=+156.242043415 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.079224 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.079704 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.579694182 +0000 UTC m=+156.242674075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.187073 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.188235 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.68820548 +0000 UTC m=+156.351185383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.290240 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.290670 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.790655987 +0000 UTC m=+156.453635880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.391297 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.391454 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.891426752 +0000 UTC m=+156.554406645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.391668 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.392144 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:04.892133215 +0000 UTC m=+156.555113108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.426894 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:04 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:04 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:04 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.426986 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.498326 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.498926 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:07:04.998895108 +0000 UTC m=+156.661875001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.602042 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.602557 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.102534382 +0000 UTC m=+156.765514275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.652903 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" event={"ID":"fe84cd00-e72e-446a-981d-5f7c0e9304af","Type":"ContainerStarted","Data":"e4adac4ebc544b3753286665b6bda3eda7f0d707c5c94fc11ede0e37086f2f4f"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.660598 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" event={"ID":"83d6878b-0209-421c-bff8-2e02c478e338","Type":"ContainerStarted","Data":"7c305989f45cbe02510abee5ebe86049b96ab6f01e1d3904e42e976248524198"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.660665 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" event={"ID":"83d6878b-0209-421c-bff8-2e02c478e338","Type":"ContainerStarted","Data":"db00d48d3b9b2d190b099fda3a0f5e2ca2f532e68930b74ea2ebb49e272ba511"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.670690 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksbxx" event={"ID":"df2082a0-52e3-4557-bdbf-9f5f654f00b4","Type":"ContainerStarted","Data":"f45f863522c946ad7ca7628ef8c2835757f95b1e3e736acb3a1201c8c39ad963"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.672176 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: 
Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.672206 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.674203 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" event={"ID":"fd7fb866-2377-4aef-89e8-9fc306b80acd","Type":"ContainerStarted","Data":"87ddfc4e2c54940a9cd0c762ddb0f8986a5f108e88924fbd5a877d447c084e89"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.693291 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-ksbxx" podStartSLOduration=134.693268381 podStartE2EDuration="2m14.693268381s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:03.693007551 +0000 UTC m=+155.355987444" watchObservedRunningTime="2026-01-29 12:07:04.693268381 +0000 UTC m=+156.356248284" Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.694064 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x9hv2" event={"ID":"10ed6060-8201-4bfb-b91e-8fe75ec3c408","Type":"ContainerStarted","Data":"5daf52065f72f19d41deb507be6f338774cdc56ba745e016dd985687e4f64099"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.703468 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.703820 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.203806852 +0000 UTC m=+156.866786745 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.726250 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" event={"ID":"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e","Type":"ContainerStarted","Data":"0c8526623ea469e5675c082281877bab4634d76f4c17487cdb4532cbfa07370d"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.771241 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dltrs" event={"ID":"4832ef07-2202-4b5b-9ed5-70bc621ea8dd","Type":"ContainerStarted","Data":"298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.783150 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" 
event={"ID":"424513c6-89ae-4755-b7e7-8bf503a91cb2","Type":"ContainerStarted","Data":"283e3492c71f464959d729e8a113ba4e93aa948f5a5d24680fd642f34d35244f"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.810837 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 12:07:04.811151 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.311139234 +0000 UTC m=+156.974119127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.812451 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" podStartSLOduration=134.812422894 podStartE2EDuration="2m14.812422894s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:04.693871199 +0000 UTC m=+156.356851092" watchObservedRunningTime="2026-01-29 12:07:04.812422894 
+0000 UTC m=+156.475402807" Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.812543 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" event={"ID":"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc","Type":"ContainerStarted","Data":"7b17a03d4efa0273c277871d62de17d7e4579842a961c7bf3d69c50d88171d3e"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.816670 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-dltrs" podStartSLOduration=134.816651338 podStartE2EDuration="2m14.816651338s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:04.811839726 +0000 UTC m=+156.474819639" watchObservedRunningTime="2026-01-29 12:07:04.816651338 +0000 UTC m=+156.479631231" Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.829197 4840 csr.go:261] certificate signing request csr-2p7gb is approved, waiting to be issued Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.830750 4840 csr.go:257] certificate signing request csr-2p7gb is issued Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.837079 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" event={"ID":"a9b060e9-18c0-425e-aa43-ad3c281de751","Type":"ContainerStarted","Data":"2f61a1f57ad98035bc1cf6524ead2a7b5022e8168b3390e6b21b0a6f2a1ba239"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.912514 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:04 crc kubenswrapper[4840]: E0129 
12:07:04.913265 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.41324416 +0000 UTC m=+157.076224043 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.954714 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" event={"ID":"3ae5b49d-5291-467c-ae2f-e69d8368dc1f","Type":"ContainerStarted","Data":"0d9c0a98c9e8abc459bcbc33b11c79f59c9c836faa55e4525141535f00640fd6"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.956364 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.957889 4840 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4sgs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.957928 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" podUID="3ae5b49d-5291-467c-ae2f-e69d8368dc1f" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.969253 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" event={"ID":"7d571f03-5043-4b23-bfe9-7d82584ef243","Type":"ContainerStarted","Data":"6e38609ae80a603251e51842b9edb35889fd4603ccf6452d020e40f6365cda10"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.971177 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" event={"ID":"1efab996-1c86-4d07-882c-45ee5f49ffe6","Type":"ContainerStarted","Data":"f517ee06a26f93d05953bdc1f8d4a1870e6cf3632a81476f8001592b0a32b705"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.972291 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" event={"ID":"57eab084-b8aa-4678-b6fc-30f97fe7b52b","Type":"ContainerStarted","Data":"dc4fe2d2b977725eb47bd928a15b1124b48ab05a91aed2e8c660bcb0901806a2"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.972328 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" event={"ID":"57eab084-b8aa-4678-b6fc-30f97fe7b52b","Type":"ContainerStarted","Data":"d8239abc85f737874d5f145fe0d30589dfa7c147003bc59a9325e65aff919af3"} Jan 29 12:07:04 crc kubenswrapper[4840]: I0129 12:07:04.991703 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" event={"ID":"ae5da369-d1bd-4e91-857f-1220180f575f","Type":"ContainerStarted","Data":"33fbf2391f827ef4fdb46114f0423122334ce426c4cbb8ca9b1ff0158e8f593a"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:04.998578 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" podStartSLOduration=133.998558638 podStartE2EDuration="2m13.998558638s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:04.997895858 +0000 UTC m=+156.660875771" watchObservedRunningTime="2026-01-29 12:07:04.998558638 +0000 UTC m=+156.661538531" Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.018202 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.018820 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.518804145 +0000 UTC m=+157.181784038 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.070214 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-94shd" podStartSLOduration=134.070188804 podStartE2EDuration="2m14.070188804s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:05.068563023 +0000 UTC m=+156.731542926" watchObservedRunningTime="2026-01-29 12:07:05.070188804 +0000 UTC m=+156.733168697" Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.125986 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.127018 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.627003985 +0000 UTC m=+157.289983878 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.154782 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerStarted","Data":"1a8356ab5e3203f2949834ac88875b11449bbba42ce099a53317c776fc04258b"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.192068 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" event={"ID":"8f623a10-e56f-42f3-8d77-a1b6f083712a","Type":"ContainerStarted","Data":"b0d0691efecab9f980b53c291d6b4be4e4128dfa7e98fe7e43b7b3142bf2969d"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.206518 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" event={"ID":"dcfac6d9-7960-4572-b7bc-ded6694a3733","Type":"ContainerStarted","Data":"ac136d631e397a63a6b5d9be7038b5df51cccecfc3262191e460127c17971b92"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.220537 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ndtmk" event={"ID":"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7","Type":"ContainerStarted","Data":"2ea79abed049a67a7de3578f3fda69757e465a4d46f79eda0c4f395ca8ffc3cb"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.228346 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.228771 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.728752699 +0000 UTC m=+157.391732592 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.254372 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" event={"ID":"03d4e817-3cad-4efb-ad16-dafdc1c56c8b","Type":"ContainerStarted","Data":"5710bb40edcd1228778096088d79d9b34948e9786225001318e4aabf36dc8108"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.288713 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" event={"ID":"3e3abb29-6a37-42c3-b49c-81f916f621e9","Type":"ContainerStarted","Data":"e115df3bfd07dbc5b31cd0d4b6b8346655f3e1e73f5eaaa466c3e1575f960ba5"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.301094 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" 
event={"ID":"083790b8-20fc-4566-b7c9-e5eb39e22b8a","Type":"ContainerStarted","Data":"a188b9ccefe18f8444eaa3646cc8f2f368aad424e35e338a45613e35bb05e3b2"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.303821 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-799dp" event={"ID":"1a255a6c-daa4-4b82-b7bb-6d8768feb78c","Type":"ContainerStarted","Data":"07fc30cd5298f505bbcfd013ae5ab5dfca30bf39860cae991780cdba55213c13"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.327642 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" event={"ID":"6b0b281a-a337-4ff1-b8e5-3760e3af5b20","Type":"ContainerStarted","Data":"bbcc338103cbee1b9a22a92f2c7e596c6cc233a208cbbe81bb40175217720c82"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.330767 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.332381 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.333071 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.833040425 +0000 UTC m=+157.496020478 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.335064 4840 patch_prober.go:28] interesting pod/console-operator-58897d9998-gj6zv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.335132 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" podUID="6b0b281a-a337-4ff1-b8e5-3760e3af5b20" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.350257 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qdhws" podStartSLOduration=134.350238397 podStartE2EDuration="2m14.350238397s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:05.100580022 +0000 UTC m=+156.763559915" watchObservedRunningTime="2026-01-29 12:07:05.350238397 +0000 UTC m=+157.013218290" Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.350750 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" 
podStartSLOduration=135.350743363 podStartE2EDuration="2m15.350743363s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:05.350491584 +0000 UTC m=+157.013471487" watchObservedRunningTime="2026-01-29 12:07:05.350743363 +0000 UTC m=+157.013723256" Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.361676 4840 generic.go:334] "Generic (PLEG): container finished" podID="e11e87ad-33de-4aeb-9742-e70801a6d526" containerID="4ab27fc1d7f7284dee6595fd12933aa3ffbd16ca3449c48e45ad647ba3934c9a" exitCode=0 Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.362787 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" event={"ID":"e11e87ad-33de-4aeb-9742-e70801a6d526","Type":"ContainerDied","Data":"4ab27fc1d7f7284dee6595fd12933aa3ffbd16ca3449c48e45ad647ba3934c9a"} Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.383516 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.431482 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:05 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:05 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:05 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.431537 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" 
Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.434354 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.435832 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:05.935820042 +0000 UTC m=+157.598799935 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.535310 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.539729 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:07:06.039686774 +0000 UTC m=+157.702666667 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.641518 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.642189 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.142176714 +0000 UTC m=+157.805156597 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.742592 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.742793 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.242772023 +0000 UTC m=+157.905751936 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.742915 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.743189 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.243181255 +0000 UTC m=+157.906161148 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.833327 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-29 12:02:04 +0000 UTC, rotation deadline is 2026-11-29 00:16:54.010056941 +0000 UTC Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.833387 4840 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7284h9m48.176672304s for next certificate rotation Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.843839 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.844165 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.344141676 +0000 UTC m=+158.007121569 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.844327 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.844689 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.344673492 +0000 UTC m=+158.007653385 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.948630 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.948857 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.448829884 +0000 UTC m=+158.111809777 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:05 crc kubenswrapper[4840]: I0129 12:07:05.949061 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:05 crc kubenswrapper[4840]: E0129 12:07:05.949409 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.449400311 +0000 UTC m=+158.112380204 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.049868 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.050551 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.550535277 +0000 UTC m=+158.213515170 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.152531 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.153083 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.653064068 +0000 UTC m=+158.316043961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.253829 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.254624 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.754595086 +0000 UTC m=+158.417574979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.365154 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.365626 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.865611043 +0000 UTC m=+158.528590936 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.398808 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" event={"ID":"424513c6-89ae-4755-b7e7-8bf503a91cb2","Type":"ContainerStarted","Data":"b5f72d01fbd57bbdf04845110ed217b9ebabfe0d564967a6c592cb15fa77ed17"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.400565 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.401841 4840 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lcmfc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.401890 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" podUID="424513c6-89ae-4755-b7e7-8bf503a91cb2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.414146 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x9hv2" 
event={"ID":"10ed6060-8201-4bfb-b91e-8fe75ec3c408","Type":"ContainerStarted","Data":"cb46f4f301b7911b6305284b25448f9c0b3b2844d5f5aab82fd0ecf42eff7379"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.414610 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x9hv2" event={"ID":"10ed6060-8201-4bfb-b91e-8fe75ec3c408","Type":"ContainerStarted","Data":"00a8a5d512521dacd0b8390d33ce84a053a2ca9c6996b731cef23dd3b3a99cc0"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.414932 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x9hv2" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.420302 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerStarted","Data":"60f1e40925d3f4888e5771354e96aa15c213200c60df268c75c8977a182cfad0"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.420741 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.423842 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" event={"ID":"dcfac6d9-7960-4572-b7bc-ded6694a3733","Type":"ContainerStarted","Data":"d49f6dbaf0a83b5aee0053af15bd7e6b1a34edc6bedd8c02fbab9598dc748501"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.424001 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.424407 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" podStartSLOduration=135.424393724 
podStartE2EDuration="2m15.424393724s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.423310471 +0000 UTC m=+158.086290384" watchObservedRunningTime="2026-01-29 12:07:06.424393724 +0000 UTC m=+158.087373617" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.427844 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:06 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:06 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:06 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.427937 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.430401 4840 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cwqw5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.430446 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 
12:07:06.431290 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ndtmk" event={"ID":"0a0a68ea-c1c8-4574-bdb7-7661ca9554a7","Type":"ContainerStarted","Data":"aac3bcc054d39e03d444d96ff7f99e573f57e220e63d570f0763aaf11e54c142"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.435860 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" event={"ID":"3e3abb29-6a37-42c3-b49c-81f916f621e9","Type":"ContainerStarted","Data":"d7857bf30fe4ce0ef01f9cc42ba151d50ec8383ce5d03e52687ab08893dc0cb9"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.446314 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" event={"ID":"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc","Type":"ContainerStarted","Data":"217b5cb0ffcc474338abfcea7c1586b0b15433b2cd08c4776117a2cfad957fb0"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.446373 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" event={"ID":"8a7c82bd-4c7a-4445-8c6a-c5324c2bbebc","Type":"ContainerStarted","Data":"ffbb2d22927c213b4b8d2d5e591a9787f126493e8be6d433fef9e434fbf239b4"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.462379 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" event={"ID":"1efab996-1c86-4d07-882c-45ee5f49ffe6","Type":"ContainerStarted","Data":"45de6e58a60c8f83925472b3b1bbf18d23c79e69d86400edb8685e5581cca769"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.468890 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" 
event={"ID":"fd7fb866-2377-4aef-89e8-9fc306b80acd","Type":"ContainerStarted","Data":"91d77356d9a0d0e792912fb72ca78c9512d64765aa309ad94bb9def7daf9b991"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.470454 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.470981 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:06.970938281 +0000 UTC m=+158.633918174 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.478641 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-799dp" event={"ID":"1a255a6c-daa4-4b82-b7bb-6d8768feb78c","Type":"ContainerStarted","Data":"51936749019f610e44e7f8b97fc3242aa502a64fdcb9f6440faa6d9232bd2885"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.481620 4840 generic.go:334] "Generic (PLEG): container finished" podID="7d571f03-5043-4b23-bfe9-7d82584ef243" containerID="7b63b8a85a5ff2c3976032ee0e358d7c486bc1349eeb1c384c584cfcee437f9e" exitCode=0 Jan 29 12:07:06 crc 
kubenswrapper[4840]: I0129 12:07:06.481700 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" event={"ID":"7d571f03-5043-4b23-bfe9-7d82584ef243","Type":"ContainerDied","Data":"7b63b8a85a5ff2c3976032ee0e358d7c486bc1349eeb1c384c584cfcee437f9e"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.489050 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" event={"ID":"083790b8-20fc-4566-b7c9-e5eb39e22b8a","Type":"ContainerStarted","Data":"6982f72292c671187931c543de19433b13c7014cd44b859d221a7467b1817588"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.489100 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" event={"ID":"083790b8-20fc-4566-b7c9-e5eb39e22b8a","Type":"ContainerStarted","Data":"7d0ebb63fb02b3712704b4050939976f903aca25227655cbd6f716c296b4a70c"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.497171 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" event={"ID":"83d6878b-0209-421c-bff8-2e02c478e338","Type":"ContainerStarted","Data":"f8b0c043ba50d30f9d4e989d96f418ad0757dc7170b5d0a49890c0c402cca1df"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.499099 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" podStartSLOduration=135.499082588 podStartE2EDuration="2m15.499082588s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.455402941 +0000 UTC m=+158.118382834" watchObservedRunningTime="2026-01-29 12:07:06.499082588 +0000 UTC m=+158.162062481" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.508494 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x9hv2" podStartSLOduration=9.508476013 podStartE2EDuration="9.508476013s" podCreationTimestamp="2026-01-29 12:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.498735596 +0000 UTC m=+158.161715489" watchObservedRunningTime="2026-01-29 12:07:06.508476013 +0000 UTC m=+158.171455906" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.511389 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" event={"ID":"17039ecb-77ce-4ee6-b6e1-dba1aebc2c53","Type":"ContainerStarted","Data":"d523b779e5b1758b15b14bfe12db2bd4745130dc8210f41c8f28593165f92c02"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.511769 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.525250 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" event={"ID":"03d4e817-3cad-4efb-ad16-dafdc1c56c8b","Type":"ContainerStarted","Data":"d068e5a4b50a0bd0c5ab3e29580bb5d75bd274246b0a2ffca816ebe9caf08920"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.539548 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bknzs" podStartSLOduration=135.539528892 podStartE2EDuration="2m15.539528892s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.537816827 +0000 UTC m=+158.200796730" watchObservedRunningTime="2026-01-29 12:07:06.539528892 +0000 UTC m=+158.202508775" Jan 29 
12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.545090 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" event={"ID":"663d1816-5db6-4cd5-9ed4-1e8353228748","Type":"ContainerStarted","Data":"3b70bcb08d0e7974c5ed1735205154bd935935a731866b0d7acc6da4e7c88f13"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.576020 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.582098 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" event={"ID":"290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e","Type":"ContainerStarted","Data":"3e89692356baf81a78e21ebd0133cb1c5b4d21cfaf00fa0059249b6579b5c997"} Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.587649 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.087622737 +0000 UTC m=+158.750602630 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.596870 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.614838 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-q4xmw" podStartSLOduration=135.614820743 podStartE2EDuration="2m15.614820743s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.581720601 +0000 UTC m=+158.244700494" watchObservedRunningTime="2026-01-29 12:07:06.614820743 +0000 UTC m=+158.277800636" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.625018 4840 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x4chr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.625089 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" podUID="290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: 
connection refused" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.643408 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" event={"ID":"e11e87ad-33de-4aeb-9742-e70801a6d526","Type":"ContainerStarted","Data":"863439bd43901c52e76cc8a604adbe14091c87cf7559d3351fcd416193c1c5d8"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.662697 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" event={"ID":"8f623a10-e56f-42f3-8d77-a1b6f083712a","Type":"ContainerStarted","Data":"871f7a432642a9deca954ed6758d28b146f26dd7098a4ea8a350721c123fbc1e"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.665130 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" event={"ID":"8f623a10-e56f-42f3-8d77-a1b6f083712a","Type":"ContainerStarted","Data":"1bff4cfea916ed711c9c77ff9881c5fe2c24e35906e7f0d4835baccd5371bc8b"} Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.664230 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.665322 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.664256 4840 patch_prober.go:28] interesting pod/console-operator-58897d9998-gj6zv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.14:8443/readyz\": 
dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.665873 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" podUID="6b0b281a-a337-4ff1-b8e5-3760e3af5b20" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.664729 4840 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-r4sgs container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.666033 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" podUID="3ae5b49d-5291-467c-ae2f-e69d8368dc1f" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.681373 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.682800 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:07:07.182785105 +0000 UTC m=+158.845764998 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.705001 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-nl4fg" podStartSLOduration=135.704949463 podStartE2EDuration="2m15.704949463s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.69531504 +0000 UTC m=+158.358294933" watchObservedRunningTime="2026-01-29 12:07:06.704949463 +0000 UTC m=+158.367929356" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.791234 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.792809 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 12:07:07.29279378 +0000 UTC m=+158.955773673 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.844084 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-799dp" podStartSLOduration=135.844065885 podStartE2EDuration="2m15.844065885s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.772557113 +0000 UTC m=+158.435537006" watchObservedRunningTime="2026-01-29 12:07:06.844065885 +0000 UTC m=+158.507045778" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.892518 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.892828 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.392812421 +0000 UTC m=+159.055792314 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.928585 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-pnlbm" podStartSLOduration=135.928567157 podStartE2EDuration="2m15.928567157s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.905774489 +0000 UTC m=+158.568754382" watchObservedRunningTime="2026-01-29 12:07:06.928567157 +0000 UTC m=+158.591547050" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.930104 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ndtmk" podStartSLOduration=9.930099516 podStartE2EDuration="9.930099516s" podCreationTimestamp="2026-01-29 12:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.844630372 +0000 UTC m=+158.507610285" watchObservedRunningTime="2026-01-29 12:07:06.930099516 +0000 UTC m=+158.593079409" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.969416 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-d6bqw" podStartSLOduration=135.969400804 podStartE2EDuration="2m15.969400804s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:06.968413862 +0000 UTC m=+158.631393755" watchObservedRunningTime="2026-01-29 12:07:06.969400804 +0000 UTC m=+158.632380697" Jan 29 12:07:06 crc kubenswrapper[4840]: I0129 12:07:06.998586 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:06 crc kubenswrapper[4840]: E0129 12:07:06.998956 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.498944224 +0000 UTC m=+159.161924117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.035470 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-kx5s9" podStartSLOduration=136.035453144 podStartE2EDuration="2m16.035453144s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.035279059 +0000 UTC m=+158.698258972" watchObservedRunningTime="2026-01-29 12:07:07.035453144 +0000 UTC m=+158.698433037" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.092704 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" podStartSLOduration=136.092688177 podStartE2EDuration="2m16.092688177s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.09215887 +0000 UTC m=+158.755138763" watchObservedRunningTime="2026-01-29 12:07:07.092688177 +0000 UTC m=+158.755668070" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.100171 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.100585 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.600569765 +0000 UTC m=+159.263549658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.100658 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.102753 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.602744444 +0000 UTC m=+159.265724337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.136211 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-8pbcl" podStartSLOduration=136.136194728 podStartE2EDuration="2m16.136194728s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.134026389 +0000 UTC m=+158.797006292" watchObservedRunningTime="2026-01-29 12:07:07.136194728 +0000 UTC m=+158.799174621" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.188651 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" podStartSLOduration=136.188633059 podStartE2EDuration="2m16.188633059s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.165744959 +0000 UTC m=+158.828724852" watchObservedRunningTime="2026-01-29 12:07:07.188633059 +0000 UTC m=+158.851612952" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.191246 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-bj49k" podStartSLOduration=136.191238022 podStartE2EDuration="2m16.191238022s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.187455982 +0000 UTC m=+158.850435885" watchObservedRunningTime="2026-01-29 12:07:07.191238022 +0000 UTC m=+158.854217915" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.201675 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.201911 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.701860417 +0000 UTC m=+159.364840320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.202184 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.202836 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.702824736 +0000 UTC m=+159.365804629 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.218368 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" podStartSLOduration=137.218342995 podStartE2EDuration="2m17.218342995s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.210645733 +0000 UTC m=+158.873625626" watchObservedRunningTime="2026-01-29 12:07:07.218342995 +0000 UTC m=+158.881322888" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.250427 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-l64hx" podStartSLOduration=136.250398145 podStartE2EDuration="2m16.250398145s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.248804145 +0000 UTC m=+158.911784028" watchObservedRunningTime="2026-01-29 12:07:07.250398145 +0000 UTC m=+158.913378038" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.303343 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.303588 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.803542789 +0000 UTC m=+159.466522682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.303640 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.304095 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.804075327 +0000 UTC m=+159.467055210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.404327 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.404656 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.904612383 +0000 UTC m=+159.567592276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.404909 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.405517 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:07.9054734 +0000 UTC m=+159.568453293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.428136 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:07 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:07 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:07 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.428192 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.506077 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.506248 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-29 12:07:08.006220784 +0000 UTC m=+159.669200677 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.506296 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.506756 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.0067342 +0000 UTC m=+159.669714093 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.608012 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.608213 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.108182076 +0000 UTC m=+159.771161969 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.608399 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.608811 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.108800695 +0000 UTC m=+159.771780588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.670366 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" event={"ID":"7d571f03-5043-4b23-bfe9-7d82584ef243","Type":"ContainerStarted","Data":"38fc0c845bc420a375c10b40ab61f33f318bfba07f8bc3ce6bb2584152ea2f60"} Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.672163 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" event={"ID":"a9b060e9-18c0-425e-aa43-ad3c281de751","Type":"ContainerStarted","Data":"b4df4904cda40e95e86c096219b9699787b8a59778a0792e439ba1693028bbd9"} Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.675148 4840 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-cwqw5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.675178 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" event={"ID":"e11e87ad-33de-4aeb-9742-e70801a6d526","Type":"ContainerStarted","Data":"2108d3bdbfda7d3291a306560e124f5ceea977ddb2722c7a9e7984ff224e44f0"} Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.675222 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" 
podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.676103 4840 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lcmfc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.676143 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" podUID="424513c6-89ae-4755-b7e7-8bf503a91cb2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.676574 4840 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-x4chr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.676617 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" podUID="290414e1-ce93-4aa2-a8d5-5a4ba8b7d61e" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.676631 4840 patch_prober.go:28] interesting pod/console-operator-58897d9998-gj6zv container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.676735 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" podUID="6b0b281a-a337-4ff1-b8e5-3760e3af5b20" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.14:8443/readyz\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.698284 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-r4sgs" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.709975 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.710109 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.210064956 +0000 UTC m=+159.873044849 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.710358 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.712707 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.212694068 +0000 UTC m=+159.875673961 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.739301 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" podStartSLOduration=136.739282116 podStartE2EDuration="2m16.739282116s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.737314954 +0000 UTC m=+159.400294857" watchObservedRunningTime="2026-01-29 12:07:07.739282116 +0000 UTC m=+159.402262019" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.775483 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" podStartSLOduration=137.775457885 podStartE2EDuration="2m17.775457885s" podCreationTimestamp="2026-01-29 12:04:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:07.771929915 +0000 UTC m=+159.434909828" watchObservedRunningTime="2026-01-29 12:07:07.775457885 +0000 UTC m=+159.438437778" Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.811720 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.811929 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.311899013 +0000 UTC m=+159.974878906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.812865 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.814666 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.31464262 +0000 UTC m=+159.977622513 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.914201 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.914417 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.414377511 +0000 UTC m=+160.077357404 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:07 crc kubenswrapper[4840]: I0129 12:07:07.914666 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:07 crc kubenswrapper[4840]: E0129 12:07:07.914991 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.414975211 +0000 UTC m=+160.077955104 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.016168 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.016403 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.516365564 +0000 UTC m=+160.179345467 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.016492 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.016853 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.5168379 +0000 UTC m=+160.179817793 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.119336 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.119514 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.619486583 +0000 UTC m=+160.282466476 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.119589 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.119986 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.619977898 +0000 UTC m=+160.282957791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.220669 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.221179 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.721156296 +0000 UTC m=+160.384136189 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.322303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.322750 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.822707425 +0000 UTC m=+160.485687478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.423558 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.424023 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:08.924006786 +0000 UTC m=+160.586986679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.435028 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:08 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:08 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:08 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.435118 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.525262 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.525757 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-29 12:07:09.02572535 +0000 UTC m=+160.688705313 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.627520 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.627987 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.127947541 +0000 UTC m=+160.790927434 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.682248 4840 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-lcmfc container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" start-of-body= Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.682301 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" podUID="424513c6-89ae-4755-b7e7-8bf503a91cb2" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.20:5443/healthz\": dial tcp 10.217.0.20:5443: connect: connection refused" Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.689118 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-x4chr" Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.729403 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.729801 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.229783179 +0000 UTC m=+160.892763072 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.830812 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.830996 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.330971026 +0000 UTC m=+160.993950919 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.831782 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.832134 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.332122473 +0000 UTC m=+160.995102366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.937151 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.937330 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.437296506 +0000 UTC m=+161.100276399 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:08 crc kubenswrapper[4840]: I0129 12:07:08.937486 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:08 crc kubenswrapper[4840]: E0129 12:07:08.937828 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.437814972 +0000 UTC m=+161.100794865 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.012716 4840 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gl7sc container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.012768 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" podUID="17039ecb-77ce-4ee6-b6e1-dba1aebc2c53" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.012899 4840 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gl7sc container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.013100 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" podUID="17039ecb-77ce-4ee6-b6e1-dba1aebc2c53" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: 
connection refused"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.042330 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.042633 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.542617673 +0000 UTC m=+161.205597566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.143260 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.143555 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.643540853 +0000 UTC m=+161.306520746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.243977 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.244222 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.744166073 +0000 UTC m=+161.407145976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.244306 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.244604 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.744592697 +0000 UTC m=+161.407572590 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.281972 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-486ss"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.282899 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.292556 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.346085 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.346282 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.846251539 +0000 UTC m=+161.509231432 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.347209 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.347561 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.84755248 +0000 UTC m=+161.510532373 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.356832 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-486ss"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.400105 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.425890 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:09 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:09 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:09 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.425999 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.449021 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.449297 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-catalog-content\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.449330 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdj8k\" (UniqueName: \"kubernetes.io/projected/3f137467-8040-45d5-bfa1-89860498eb85-kube-api-access-cdj8k\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.449369 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-utilities\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.449532 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:09.949508802 +0000 UTC m=+161.612488695 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.471616 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c2k7s"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.473381 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.475722 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.501552 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2k7s"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.551018 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdj8k\" (UniqueName: \"kubernetes.io/projected/3f137467-8040-45d5-bfa1-89860498eb85-kube-api-access-cdj8k\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.551104 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-utilities\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.551191 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.551216 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-catalog-content\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.551610 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-catalog-content\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.552368 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.052355971 +0000 UTC m=+161.715335864 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.552606 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-utilities\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.606012 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdj8k\" (UniqueName: \"kubernetes.io/projected/3f137467-8040-45d5-bfa1-89860498eb85-kube-api-access-cdj8k\") pod \"community-operators-486ss\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.614875 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-486ss"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.652682 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.652893 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-catalog-content\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.652921 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnb5p\" (UniqueName: \"kubernetes.io/projected/f8720503-5456-4934-a5a9-d58d6eeeb0a6-kube-api-access-wnb5p\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.652977 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-utilities\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.653380 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.153311652 +0000 UTC m=+161.816291555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.686481 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tnxg7"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.687463 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.712867 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tnxg7"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.721838 4840 generic.go:334] "Generic (PLEG): container finished" podID="fe84cd00-e72e-446a-981d-5f7c0e9304af" containerID="e4adac4ebc544b3753286665b6bda3eda7f0d707c5c94fc11ede0e37086f2f4f" exitCode=0
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.722500 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" event={"ID":"fe84cd00-e72e-446a-981d-5f7c0e9304af","Type":"ContainerDied","Data":"e4adac4ebc544b3753286665b6bda3eda7f0d707c5c94fc11ede0e37086f2f4f"}
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.753896 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-utilities\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.754012 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-catalog-content\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.754043 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnb5p\" (UniqueName: \"kubernetes.io/projected/f8720503-5456-4934-a5a9-d58d6eeeb0a6-kube-api-access-wnb5p\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.754066 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.754338 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.254327505 +0000 UTC m=+161.917307398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.754931 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-utilities\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.755301 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-catalog-content\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.795121 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnb5p\" (UniqueName: \"kubernetes.io/projected/f8720503-5456-4934-a5a9-d58d6eeeb0a6-kube-api-access-wnb5p\") pod \"certified-operators-c2k7s\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.855013 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.855283 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-catalog-content\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.855372 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-utilities\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.855416 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6dpt\" (UniqueName: \"kubernetes.io/projected/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-kube-api-access-z6dpt\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.856566 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.356548146 +0000 UTC m=+162.019528039 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.882761 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sd7t2"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.883821 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.908514 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sd7t2"]
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.956460 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-utilities\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.956521 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.956551 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6dpt\" (UniqueName: \"kubernetes.io/projected/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-kube-api-access-z6dpt\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.956593 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-catalog-content\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.957426 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-catalog-content\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: I0129 12:07:09.957648 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-utilities\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:09 crc kubenswrapper[4840]: E0129 12:07:09.957935 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.457923839 +0000 UTC m=+162.120903732 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.072117 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.072925 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.572892781 +0000 UTC m=+162.235872674 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.073007 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.074045 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.574015876 +0000 UTC m=+162.236995769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.078387 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-utilities\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.078422 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-catalog-content\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.078442 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnck2\" (UniqueName: \"kubernetes.io/projected/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-kube-api-access-vnck2\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.095427 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2k7s"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.098500 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6dpt\" (UniqueName: \"kubernetes.io/projected/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-kube-api-access-z6dpt\") pod \"community-operators-tnxg7\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") " pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.179073 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.179269 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-utilities\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.179293 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-catalog-content\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.179312 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnck2\" (UniqueName: \"kubernetes.io/projected/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-kube-api-access-vnck2\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.179751 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.679736687 +0000 UTC m=+162.342716580 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.180232 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-utilities\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.180457 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-catalog-content\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.182718 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.182757 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.183034 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.183130 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.201799 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-dltrs"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.201897 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-dltrs"
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.202011 4840 patch_prober.go:28] interesting pod/console-f9d7485db-dltrs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.202091 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dltrs"
podUID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.209112 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.210230 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.210623 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-486ss"] Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.218784 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnck2\" (UniqueName: \"kubernetes.io/projected/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-kube-api-access-vnck2\") pod \"certified-operators-sd7t2\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") " pod="openshift-marketplace/certified-operators-sd7t2" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.227123 4840 patch_prober.go:28] interesting pod/apiserver-76f77b778f-98jrd container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.227180 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-98jrd" podUID="e11e87ad-33de-4aeb-9742-e70801a6d526" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.10:8443/livez\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.280426 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.284359 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.784343582 +0000 UTC m=+162.447323475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.292499 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-gj6zv" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.306868 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.306926 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.307683 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tnxg7" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.381767 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.383356 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.883333641 +0000 UTC m=+162.546313534 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.424194 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-dz9sd" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.436610 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:10 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:10 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:10 crc 
kubenswrapper[4840]: healthz check failed Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.437031 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.483401 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.483709 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:10.983697613 +0000 UTC m=+162.646677506 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.518804 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sd7t2" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.584738 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.586221 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.086162121 +0000 UTC m=+162.749142014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.586514 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.587186 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.087174362 +0000 UTC m=+162.750154265 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.613446 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c2k7s"] Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.688243 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.688724 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.18869586 +0000 UTC m=+162.851675753 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.739571 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2k7s" event={"ID":"f8720503-5456-4934-a5a9-d58d6eeeb0a6","Type":"ContainerStarted","Data":"15ddf5699af3a0577583e7c62665cbc0fbd14f0ceab114f1e90197ce087b361d"} Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.742697 4840 generic.go:334] "Generic (PLEG): container finished" podID="3f137467-8040-45d5-bfa1-89860498eb85" containerID="1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03" exitCode=0 Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.742811 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486ss" event={"ID":"3f137467-8040-45d5-bfa1-89860498eb85","Type":"ContainerDied","Data":"1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03"} Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.742857 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486ss" event={"ID":"3f137467-8040-45d5-bfa1-89860498eb85","Type":"ContainerStarted","Data":"3d9c9c71f0533d4fc6dc4a026a8d28ca635f93cf1b650f01218293cb56a52802"} Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.785712 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" event={"ID":"a9b060e9-18c0-425e-aa43-ad3c281de751","Type":"ContainerStarted","Data":"9186e06cfd625528d465679c27d9417b188e12d3c30b1f94051115dea56d35cd"} Jan 
29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.790692 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.795395 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.295377112 +0000 UTC m=+162.958357005 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.804895 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.852445 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-lcmfc" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.880082 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tnxg7"] Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.888212 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:07:10 crc kubenswrapper[4840]: I0129 12:07:10.905707 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:10 crc kubenswrapper[4840]: E0129 12:07:10.906640 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.406613886 +0000 UTC m=+163.069593779 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:10 crc kubenswrapper[4840]: W0129 12:07:10.923353 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbbce61_92b7_40f9_ac7f_96b2ad2d43b4.slice/crio-21644efa53e305ecf4837889d2a6b6df7c19ec6b36bd677dd865362913f7efca WatchSource:0}: Error finding container 21644efa53e305ecf4837889d2a6b6df7c19ec6b36bd677dd865362913f7efca: Status 404 returned error can't find the container with id 21644efa53e305ecf4837889d2a6b6df7c19ec6b36bd677dd865362913f7efca Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.009898 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.010311 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.510300153 +0000 UTC m=+163.173280046 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.117076 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.117503 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.617488219 +0000 UTC m=+163.280468112 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.135189 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sd7t2"] Jan 29 12:07:11 crc kubenswrapper[4840]: W0129 12:07:11.197987 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93c0bb8f_812d_4bf4_93c4_7ccd557d326d.slice/crio-842b1028f7332e1f79432690a345d79e44f22df7252e7aa42f4a4179a37e8df0 WatchSource:0}: Error finding container 842b1028f7332e1f79432690a345d79e44f22df7252e7aa42f4a4179a37e8df0: Status 404 returned error can't find the container with id 842b1028f7332e1f79432690a345d79e44f22df7252e7aa42f4a4179a37e8df0 Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.206640 4840 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.218330 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.218773 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.71875867 +0000 UTC m=+163.381738563 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.252838 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.327732 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume\") pod \"fe84cd00-e72e-446a-981d-5f7c0e9304af\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.327896 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq9m9\" (UniqueName: \"kubernetes.io/projected/fe84cd00-e72e-446a-981d-5f7c0e9304af-kube-api-access-zq9m9\") pod \"fe84cd00-e72e-446a-981d-5f7c0e9304af\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.328140 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.328212 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe84cd00-e72e-446a-981d-5f7c0e9304af-secret-volume\") pod \"fe84cd00-e72e-446a-981d-5f7c0e9304af\" (UID: \"fe84cd00-e72e-446a-981d-5f7c0e9304af\") " Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.330121 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume" (OuterVolumeSpecName: "config-volume") pod "fe84cd00-e72e-446a-981d-5f7c0e9304af" (UID: "fe84cd00-e72e-446a-981d-5f7c0e9304af"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.330303 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.830270042 +0000 UTC m=+163.493249945 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.357609 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe84cd00-e72e-446a-981d-5f7c0e9304af-kube-api-access-zq9m9" (OuterVolumeSpecName: "kube-api-access-zq9m9") pod "fe84cd00-e72e-446a-981d-5f7c0e9304af" (UID: "fe84cd00-e72e-446a-981d-5f7c0e9304af"). InnerVolumeSpecName "kube-api-access-zq9m9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.357985 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe84cd00-e72e-446a-981d-5f7c0e9304af-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fe84cd00-e72e-446a-981d-5f7c0e9304af" (UID: "fe84cd00-e72e-446a-981d-5f7c0e9304af"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.408098 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.408316 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe84cd00-e72e-446a-981d-5f7c0e9304af" containerName="collect-profiles" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.408327 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe84cd00-e72e-446a-981d-5f7c0e9304af" containerName="collect-profiles" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.408442 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe84cd00-e72e-446a-981d-5f7c0e9304af" containerName="collect-profiles" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.408829 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.412656 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.415641 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.420500 4840 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-kc2rh container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]log ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]etcd ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]etcd-readiness ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 29 
12:07:11 crc kubenswrapper[4840]: [-]informer-sync failed: reason withheld Jan 29 12:07:11 crc kubenswrapper[4840]: [+]poststarthook/generic-apiserver-start-informers ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]poststarthook/max-in-flight-filter ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]poststarthook/openshift.io-StartUserInformer ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]poststarthook/openshift.io-StartOAuthInformer ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]poststarthook/openshift.io-StartTokenTimeoutUpdater ok Jan 29 12:07:11 crc kubenswrapper[4840]: [+]shutdown ok Jan 29 12:07:11 crc kubenswrapper[4840]: readyz check failed Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.420581 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh" podUID="7d571f03-5043-4b23-bfe9-7d82584ef243" containerName="oauth-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.422569 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.429530 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:11 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:11 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:11 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.429612 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.430547 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.430673 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fe84cd00-e72e-446a-981d-5f7c0e9304af-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.430698 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe84cd00-e72e-446a-981d-5f7c0e9304af-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.430712 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq9m9\" (UniqueName: \"kubernetes.io/projected/fe84cd00-e72e-446a-981d-5f7c0e9304af-kube-api-access-zq9m9\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.432615 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:11.932589385 +0000 UTC m=+163.595569278 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.433785 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.478073 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-r6w65"] Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.483246 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.486728 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.503551 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6w65"] Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.531787 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.532230 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-29 12:07:12.032193403 +0000 UTC m=+163.695173296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.532486 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rqz7\" (UniqueName: \"kubernetes.io/projected/12410648-0772-40f7-9261-107634802711-kube-api-access-6rqz7\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.532570 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.532653 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.532677 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-utilities\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.532761 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-catalog-content\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.532806 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: E0129 12:07:11.533515 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-29 12:07:12.033506994 +0000 UTC m=+163.696486887 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-8fwb9" (UID: "01215522-37e6-4461-91d1-f695896d6ede") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.613932 4840 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-29T12:07:11.206665828Z","Handler":null,"Name":""} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.633968 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.634396 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-catalog-content\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.634444 4840 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.634491 4840 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: 
/var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.634449 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.634842 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rqz7\" (UniqueName: \"kubernetes.io/projected/12410648-0772-40f7-9261-107634802711-kube-api-access-6rqz7\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.634916 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.635036 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-utilities\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.635270 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.635710 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-catalog-content\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.635890 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-utilities\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.636096 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.637294 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.643603 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.643896 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.648068 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.664877 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.666023 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6rqz7\" (UniqueName: \"kubernetes.io/projected/12410648-0772-40f7-9261-107634802711-kube-api-access-6rqz7\") pod \"redhat-marketplace-r6w65\" (UID: \"12410648-0772-40f7-9261-107634802711\") " pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.699387 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.729189 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.736005 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2ab516-81c5-4625-a36f-8fc4770daf66-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.736449 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.736640 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2ab516-81c5-4625-a36f-8fc4770daf66-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.742948 4840 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.743029 4840 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.797157 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-8fwb9\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.798606 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" event={"ID":"fe84cd00-e72e-446a-981d-5f7c0e9304af","Type":"ContainerDied","Data":"6432fbbedcd59d39e979d194574472c75276b4593cbf3de68cb30aa0221b48d2"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.798650 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6432fbbedcd59d39e979d194574472c75276b4593cbf3de68cb30aa0221b48d2" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.798934 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.811656 4840 generic.go:334] "Generic (PLEG): container finished" podID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerID="ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491" exitCode=0 Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.812148 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2k7s" event={"ID":"f8720503-5456-4934-a5a9-d58d6eeeb0a6","Type":"ContainerDied","Data":"ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.820836 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" event={"ID":"a9b060e9-18c0-425e-aa43-ad3c281de751","Type":"ContainerStarted","Data":"37afe784aee8fd2c1bab1baeee643a73d738590d4e27b8778a6a301eea2f0e03"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.820886 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" event={"ID":"a9b060e9-18c0-425e-aa43-ad3c281de751","Type":"ContainerStarted","Data":"920e1b46b11056125612aeedd7fd7ac8c486ef864f7b10ec9391da1012d7055e"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.829263 4840 generic.go:334] "Generic (PLEG): container finished" podID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerID="e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2" exitCode=0 Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.829353 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd7t2" event={"ID":"93c0bb8f-812d-4bf4-93c4-7ccd557d326d","Type":"ContainerDied","Data":"e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.829384 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd7t2" event={"ID":"93c0bb8f-812d-4bf4-93c4-7ccd557d326d","Type":"ContainerStarted","Data":"842b1028f7332e1f79432690a345d79e44f22df7252e7aa42f4a4179a37e8df0"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.831904 4840 generic.go:334] "Generic (PLEG): container finished" podID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerID="06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a" exitCode=0 Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.831978 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxg7" event={"ID":"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4","Type":"ContainerDied","Data":"06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.832006 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxg7" event={"ID":"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4","Type":"ContainerStarted","Data":"21644efa53e305ecf4837889d2a6b6df7c19ec6b36bd677dd865362913f7efca"} Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.837644 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2ab516-81c5-4625-a36f-8fc4770daf66-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.837704 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2ab516-81c5-4625-a36f-8fc4770daf66-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.838515 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2ab516-81c5-4625-a36f-8fc4770daf66-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.843912 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.871791 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.877994 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2ab516-81c5-4625-a36f-8fc4770daf66-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.906592 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vgg49"] Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.907737 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.911474 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgg49"] Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.939114 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-utilities\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.939168 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndxl\" (UniqueName: \"kubernetes.io/projected/fcb94932-ec4e-48f5-bda1-51aa89a53087-kube-api-access-kndxl\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:11 crc kubenswrapper[4840]: I0129 12:07:11.939345 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-catalog-content\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.020886 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gl7sc" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.027401 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.059375 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-78t8x" podStartSLOduration=15.05934946 podStartE2EDuration="15.05934946s" podCreationTimestamp="2026-01-29 12:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:12.036748918 +0000 UTC m=+163.699728811" watchObservedRunningTime="2026-01-29 12:07:12.05934946 +0000 UTC m=+163.722329353" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.062055 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-catalog-content\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.073453 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-catalog-content\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.081229 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-utilities\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.081341 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kndxl\" (UniqueName: 
\"kubernetes.io/projected/fcb94932-ec4e-48f5-bda1-51aa89a53087-kube-api-access-kndxl\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.082234 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-utilities\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.149142 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndxl\" (UniqueName: \"kubernetes.io/projected/fcb94932-ec4e-48f5-bda1-51aa89a53087-kube-api-access-kndxl\") pod \"redhat-marketplace-vgg49\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") " pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.293140 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.342245 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.428618 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 29 12:07:12 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld Jan 29 12:07:12 crc kubenswrapper[4840]: [+]process-running ok Jan 29 12:07:12 crc kubenswrapper[4840]: healthz check failed Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.428697 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.471082 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jj2gn"] Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.472446 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.475978 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.500865 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jj2gn"]
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.531446 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8fwb9"]
Jan 29 12:07:12 crc kubenswrapper[4840]: W0129 12:07:12.581855 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01215522_37e6_4461_91d1_f695896d6ede.slice/crio-eefb02e2593812958d02fe9ed5fdb2692940ecda96f9d6fde565e6ca19a0cb83 WatchSource:0}: Error finding container eefb02e2593812958d02fe9ed5fdb2692940ecda96f9d6fde565e6ca19a0cb83: Status 404 returned error can't find the container with id eefb02e2593812958d02fe9ed5fdb2692940ecda96f9d6fde565e6ca19a0cb83
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.602757 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-catalog-content\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.602805 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-utilities\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.602844 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvfk6\" (UniqueName: \"kubernetes.io/projected/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-kube-api-access-fvfk6\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.625276 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6w65"]
Jan 29 12:07:12 crc kubenswrapper[4840]: W0129 12:07:12.645371 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12410648_0772_40f7_9261_107634802711.slice/crio-300fdfc67c07d9980861b9c7777a9831b8c0da119394577be27e1f3086ba5790 WatchSource:0}: Error finding container 300fdfc67c07d9980861b9c7777a9831b8c0da119394577be27e1f3086ba5790: Status 404 returned error can't find the container with id 300fdfc67c07d9980861b9c7777a9831b8c0da119394577be27e1f3086ba5790
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.704234 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvfk6\" (UniqueName: \"kubernetes.io/projected/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-kube-api-access-fvfk6\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.704356 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-catalog-content\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.704382 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-utilities\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.704794 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-utilities\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.705990 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-catalog-content\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.729588 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgg49"]
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.742558 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvfk6\" (UniqueName: \"kubernetes.io/projected/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-kube-api-access-fvfk6\") pod \"redhat-operators-jj2gn\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.806693 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.821123 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.871896 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pqht4"]
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.873125 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.904357 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" event={"ID":"01215522-37e6-4461-91d1-f695896d6ede","Type":"ContainerStarted","Data":"eefb02e2593812958d02fe9ed5fdb2692940ecda96f9d6fde565e6ca19a0cb83"}
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.931697 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6w65" event={"ID":"12410648-0772-40f7-9261-107634802711","Type":"ContainerStarted","Data":"300fdfc67c07d9980861b9c7777a9831b8c0da119394577be27e1f3086ba5790"}
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.935843 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgg49" event={"ID":"fcb94932-ec4e-48f5-bda1-51aa89a53087","Type":"ContainerStarted","Data":"b394c303327cba9606513796f8e6a6d35b6589ffda17a0f2a0c89cee68c0c854"}
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.936326 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqht4"]
Jan 29 12:07:12 crc kubenswrapper[4840]: I0129 12:07:12.954036 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5c3f64db-9739-4507-9f5b-99c96ab60ce4","Type":"ContainerStarted","Data":"1dfee71d61a4e2559852e506673a46ca840552e31101dcafb20390f6c66cb2b6"}
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.008897 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-catalog-content\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.009315 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jznq\" (UniqueName: \"kubernetes.io/projected/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-kube-api-access-6jznq\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.009368 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-utilities\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.025765 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 29 12:07:13 crc kubenswrapper[4840]: W0129 12:07:13.071906 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podbe2ab516_81c5_4625_a36f_8fc4770daf66.slice/crio-c40261cf511dfc10610aebcf9e3493682d84c02a75fbb29313a0e51637b1a0be WatchSource:0}: Error finding container c40261cf511dfc10610aebcf9e3493682d84c02a75fbb29313a0e51637b1a0be: Status 404 returned error can't find the container with id c40261cf511dfc10610aebcf9e3493682d84c02a75fbb29313a0e51637b1a0be
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.115568 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-catalog-content\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.115637 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jznq\" (UniqueName: \"kubernetes.io/projected/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-kube-api-access-6jznq\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.115668 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-utilities\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.116284 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-utilities\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.116604 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-catalog-content\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.149063 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jznq\" (UniqueName: \"kubernetes.io/projected/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-kube-api-access-6jznq\") pod \"redhat-operators-pqht4\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.187499 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.318045 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.326347 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c69a828d-5ed4-45ac-95a4-f0cc698d6992-metrics-certs\") pod \"network-metrics-daemon-mnzvc\" (UID: \"c69a828d-5ed4-45ac-95a4-f0cc698d6992\") " pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.436181 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-mnzvc"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.453764 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:13 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:13 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:13 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.454385 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.458443 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jj2gn"]
Jan 29 12:07:13 crc kubenswrapper[4840]: W0129 12:07:13.658058 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd2c47ef_59b6_409e_8091_2e6f20eb89cb.slice/crio-22af931aee8fdc8f8a3ec880f6f42bdc51eb9f0feed250a71ec464ac771462bb WatchSource:0}: Error finding container 22af931aee8fdc8f8a3ec880f6f42bdc51eb9f0feed250a71ec464ac771462bb: Status 404 returned error can't find the container with id 22af931aee8fdc8f8a3ec880f6f42bdc51eb9f0feed250a71ec464ac771462bb
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.835975 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pqht4"]
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.926143 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-mnzvc"]
Jan 29 12:07:13 crc kubenswrapper[4840]: W0129 12:07:13.982110 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc69a828d_5ed4_45ac_95a4_f0cc698d6992.slice/crio-66edbddf483bd7fd4a8c655d25a344faf1637cbcfed4bfb84319eb92e262f090 WatchSource:0}: Error finding container 66edbddf483bd7fd4a8c655d25a344faf1637cbcfed4bfb84319eb92e262f090: Status 404 returned error can't find the container with id 66edbddf483bd7fd4a8c655d25a344faf1637cbcfed4bfb84319eb92e262f090
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.989907 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerStarted","Data":"d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330"}
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.989965 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerStarted","Data":"22af931aee8fdc8f8a3ec880f6f42bdc51eb9f0feed250a71ec464ac771462bb"}
Jan 29 12:07:13 crc kubenswrapper[4840]: I0129 12:07:13.999319 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" event={"ID":"01215522-37e6-4461-91d1-f695896d6ede","Type":"ContainerStarted","Data":"20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892"}
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.000392 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9"
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.004352 4840 generic.go:334] "Generic (PLEG): container finished" podID="12410648-0772-40f7-9261-107634802711" containerID="855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7" exitCode=0
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.004691 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6w65" event={"ID":"12410648-0772-40f7-9261-107634802711","Type":"ContainerDied","Data":"855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7"}
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.020835 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5c3f64db-9739-4507-9f5b-99c96ab60ce4","Type":"ContainerStarted","Data":"432fc68b61bced7bcbcc0ec24232205ad8da2f135c67f748240db26a72837b5a"}
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.025291 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqht4" event={"ID":"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e","Type":"ContainerStarted","Data":"14ec9233491bc54db1e03c95cf5f9ff671a44768876c89a07adb11579276a858"}
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.028585 4840 generic.go:334] "Generic (PLEG): container finished" podID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerID="3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051" exitCode=0
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.028645 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgg49" event={"ID":"fcb94932-ec4e-48f5-bda1-51aa89a53087","Type":"ContainerDied","Data":"3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051"}
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.042919 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"be2ab516-81c5-4625-a36f-8fc4770daf66","Type":"ContainerStarted","Data":"087ac05ef33951445524f39db9a039c5bc4bd7d85c3788f8354608ce470f6a67"}
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.043571 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"be2ab516-81c5-4625-a36f-8fc4770daf66","Type":"ContainerStarted","Data":"c40261cf511dfc10610aebcf9e3493682d84c02a75fbb29313a0e51637b1a0be"}
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.052898 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" podStartSLOduration=143.05286515 podStartE2EDuration="2m23.05286515s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:14.036069261 +0000 UTC m=+165.699049154" watchObservedRunningTime="2026-01-29 12:07:14.05286515 +0000 UTC m=+165.715845043"
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.119859 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.119838339 podStartE2EDuration="3.119838339s" podCreationTimestamp="2026-01-29 12:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:14.119643623 +0000 UTC m=+165.782623536" watchObservedRunningTime="2026-01-29 12:07:14.119838339 +0000 UTC m=+165.782818232"
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.162036 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.161990858 podStartE2EDuration="3.161990858s" podCreationTimestamp="2026-01-29 12:07:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:14.153247022 +0000 UTC m=+165.816226915" watchObservedRunningTime="2026-01-29 12:07:14.161990858 +0000 UTC m=+165.824970761"
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.429132 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:14 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:14 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:14 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:14 crc kubenswrapper[4840]: I0129 12:07:14.429191 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.061658 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" event={"ID":"c69a828d-5ed4-45ac-95a4-f0cc698d6992","Type":"ContainerStarted","Data":"33c27bd61977c19fc9a1e6090b28e072fe51c462766acd6d1abc6e56efb9d17e"}
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.062119 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" event={"ID":"c69a828d-5ed4-45ac-95a4-f0cc698d6992","Type":"ContainerStarted","Data":"66edbddf483bd7fd4a8c655d25a344faf1637cbcfed4bfb84319eb92e262f090"}
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.066655 4840 generic.go:334] "Generic (PLEG): container finished" podID="be2ab516-81c5-4625-a36f-8fc4770daf66" containerID="087ac05ef33951445524f39db9a039c5bc4bd7d85c3788f8354608ce470f6a67" exitCode=0
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.066796 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"be2ab516-81c5-4625-a36f-8fc4770daf66","Type":"ContainerDied","Data":"087ac05ef33951445524f39db9a039c5bc4bd7d85c3788f8354608ce470f6a67"}
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.068643 4840 generic.go:334] "Generic (PLEG): container finished" podID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerID="d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330" exitCode=0
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.068718 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerDied","Data":"d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330"}
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.073410 4840 generic.go:334] "Generic (PLEG): container finished" podID="5c3f64db-9739-4507-9f5b-99c96ab60ce4" containerID="432fc68b61bced7bcbcc0ec24232205ad8da2f135c67f748240db26a72837b5a" exitCode=0
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.073514 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5c3f64db-9739-4507-9f5b-99c96ab60ce4","Type":"ContainerDied","Data":"432fc68b61bced7bcbcc0ec24232205ad8da2f135c67f748240db26a72837b5a"}
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.081275 4840 generic.go:334] "Generic (PLEG): container finished" podID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerID="a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6" exitCode=0
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.081377 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqht4" event={"ID":"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e","Type":"ContainerDied","Data":"a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6"}
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.222606 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.230851 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-98jrd"
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.319545 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-kc2rh"
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.453498 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:15 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:15 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:15 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.453610 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:15 crc kubenswrapper[4840]: I0129 12:07:15.890783 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x9hv2"
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.114235 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-mnzvc" event={"ID":"c69a828d-5ed4-45ac-95a4-f0cc698d6992","Type":"ContainerStarted","Data":"1bff0904299d751e92480df12ef8784c7710f0490e91ecabbd91548d3bfe10e9"}
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.429230 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:16 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:16 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:16 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.429332 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.702263 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.726034 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-mnzvc" podStartSLOduration=145.72601037 podStartE2EDuration="2m25.72601037s" podCreationTimestamp="2026-01-29 12:04:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:07:16.146183184 +0000 UTC m=+167.809163087" watchObservedRunningTime="2026-01-29 12:07:16.72601037 +0000 UTC m=+168.388990263"
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.810142 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.821487 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2ab516-81c5-4625-a36f-8fc4770daf66-kubelet-dir\") pod \"be2ab516-81c5-4625-a36f-8fc4770daf66\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") "
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.821590 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2ab516-81c5-4625-a36f-8fc4770daf66-kube-api-access\") pod \"be2ab516-81c5-4625-a36f-8fc4770daf66\" (UID: \"be2ab516-81c5-4625-a36f-8fc4770daf66\") "
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.821620 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be2ab516-81c5-4625-a36f-8fc4770daf66-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "be2ab516-81c5-4625-a36f-8fc4770daf66" (UID: "be2ab516-81c5-4625-a36f-8fc4770daf66"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.822007 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/be2ab516-81c5-4625-a36f-8fc4770daf66-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.851710 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be2ab516-81c5-4625-a36f-8fc4770daf66-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "be2ab516-81c5-4625-a36f-8fc4770daf66" (UID: "be2ab516-81c5-4625-a36f-8fc4770daf66"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.922606 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kubelet-dir\") pod \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") "
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.922772 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kube-api-access\") pod \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\" (UID: \"5c3f64db-9739-4507-9f5b-99c96ab60ce4\") "
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.922769 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5c3f64db-9739-4507-9f5b-99c96ab60ce4" (UID: "5c3f64db-9739-4507-9f5b-99c96ab60ce4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.923063 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.923077 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/be2ab516-81c5-4625-a36f-8fc4770daf66-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 12:07:16 crc kubenswrapper[4840]: I0129 12:07:16.932478 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5c3f64db-9739-4507-9f5b-99c96ab60ce4" (UID: "5c3f64db-9739-4507-9f5b-99c96ab60ce4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.024383 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5c3f64db-9739-4507-9f5b-99c96ab60ce4-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.182442 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.182490 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"be2ab516-81c5-4625-a36f-8fc4770daf66","Type":"ContainerDied","Data":"c40261cf511dfc10610aebcf9e3493682d84c02a75fbb29313a0e51637b1a0be"}
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.182576 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c40261cf511dfc10610aebcf9e3493682d84c02a75fbb29313a0e51637b1a0be"
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.193320 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5c3f64db-9739-4507-9f5b-99c96ab60ce4","Type":"ContainerDied","Data":"1dfee71d61a4e2559852e506673a46ca840552e31101dcafb20390f6c66cb2b6"}
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.193364 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.193383 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1dfee71d61a4e2559852e506673a46ca840552e31101dcafb20390f6c66cb2b6"
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.427529 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:17 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:17 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:17 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:17 crc kubenswrapper[4840]: I0129 12:07:17.427622 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:18 crc kubenswrapper[4840]: I0129 12:07:18.423739 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:18 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:18 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:18 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:18 crc kubenswrapper[4840]: I0129 12:07:18.424299 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:19 crc kubenswrapper[4840]: I0129 12:07:19.433003 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:19 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:19 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:19 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:19 crc kubenswrapper[4840]: I0129 12:07:19.433065 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.183243 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.183303 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.183722 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body=
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.183756 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused"
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.201229 4840 patch_prober.go:28] interesting pod/console-f9d7485db-dltrs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.201390 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-dltrs" podUID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" containerName="console" probeResult="failure" output="Get \"https://10.217.0.9:8443/health\": dial tcp 10.217.0.9:8443: connect: connection refused"
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.442579 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:20 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:20 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:20 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:20 crc kubenswrapper[4840]: I0129 12:07:20.443679 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:21 crc kubenswrapper[4840]: I0129 12:07:21.423995 4840 patch_prober.go:28] interesting pod/router-default-5444994796-dz9sd container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 29 12:07:21 crc kubenswrapper[4840]: [-]has-synced failed: reason withheld
Jan 29 12:07:21 crc kubenswrapper[4840]: [+]process-running ok
Jan 29 12:07:21 crc kubenswrapper[4840]: healthz check failed
Jan 29 12:07:21 crc kubenswrapper[4840]: I0129 12:07:21.424114 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-dz9sd" podUID="f1a38ee6-33eb-4c68-955a-1253ef95d412" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 29 12:07:22 crc kubenswrapper[4840]: I0129 12:07:22.427078 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-dz9sd"
Jan 29 12:07:22 crc kubenswrapper[4840]: I0129 12:07:22.430415 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-dz9sd"
Jan 29 12:07:23 crc kubenswrapper[4840]: I0129 12:07:23.522273 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:07:23 crc kubenswrapper[4840]: I0129 12:07:23.522808 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:07:26 crc kubenswrapper[4840]: I0129 12:07:26.177853 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bg94b"]
Jan 29 12:07:26 crc kubenswrapper[4840]: I0129 12:07:26.178387 4840
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" containerID="cri-o://24d46f7df412f907f5e34ebd46430c8c30373de0baaaf6723cfcf8f1ab6be320" gracePeriod=30 Jan 29 12:07:26 crc kubenswrapper[4840]: I0129 12:07:26.193481 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg"] Jan 29 12:07:26 crc kubenswrapper[4840]: I0129 12:07:26.193731 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" containerID="cri-o://d3d61c2960c16fd52a16e18f9f0cad71db5161440ec99088d64c96feaf059fd0" gracePeriod=30 Jan 29 12:07:27 crc kubenswrapper[4840]: I0129 12:07:27.349617 4840 generic.go:334] "Generic (PLEG): container finished" podID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerID="d3d61c2960c16fd52a16e18f9f0cad71db5161440ec99088d64c96feaf059fd0" exitCode=0 Jan 29 12:07:27 crc kubenswrapper[4840]: I0129 12:07:27.349685 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" event={"ID":"03b827e0-ff80-4021-9b29-72935f8fe30b","Type":"ContainerDied","Data":"d3d61c2960c16fd52a16e18f9f0cad71db5161440ec99088d64c96feaf059fd0"} Jan 29 12:07:27 crc kubenswrapper[4840]: I0129 12:07:27.354615 4840 generic.go:334] "Generic (PLEG): container finished" podID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerID="24d46f7df412f907f5e34ebd46430c8c30373de0baaaf6723cfcf8f1ab6be320" exitCode=0 Jan 29 12:07:27 crc kubenswrapper[4840]: I0129 12:07:27.354651 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" 
event={"ID":"bc4198ee-d503-4b4d-81bf-f8ce73e9de18","Type":"ContainerDied","Data":"24d46f7df412f907f5e34ebd46430c8c30373de0baaaf6723cfcf8f1ab6be320"} Jan 29 12:07:29 crc kubenswrapper[4840]: I0129 12:07:29.393721 4840 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bg94b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 12:07:29 crc kubenswrapper[4840]: I0129 12:07:29.394509 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 12:07:29 crc kubenswrapper[4840]: I0129 12:07:29.963157 4840 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8pndg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 29 12:07:29 crc kubenswrapper[4840]: I0129 12:07:29.963317 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.182746 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: 
connection refused" start-of-body= Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.182893 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.182762 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.183027 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.183099 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.183641 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.183768 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:30 
crc kubenswrapper[4840]: I0129 12:07:30.183783 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"f45f863522c946ad7ca7628ef8c2835757f95b1e3e736acb3a1201c8c39ad963"} pod="openshift-console/downloads-7954f5f757-ksbxx" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.183899 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" containerID="cri-o://f45f863522c946ad7ca7628ef8c2835757f95b1e3e736acb3a1201c8c39ad963" gracePeriod=2 Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.205554 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:07:30 crc kubenswrapper[4840]: I0129 12:07:30.212462 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:07:31 crc kubenswrapper[4840]: I0129 12:07:31.380713 4840 generic.go:334] "Generic (PLEG): container finished" podID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerID="f45f863522c946ad7ca7628ef8c2835757f95b1e3e736acb3a1201c8c39ad963" exitCode=0 Jan 29 12:07:31 crc kubenswrapper[4840]: I0129 12:07:31.380814 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksbxx" event={"ID":"df2082a0-52e3-4557-bdbf-9f5f654f00b4","Type":"ContainerDied","Data":"f45f863522c946ad7ca7628ef8c2835757f95b1e3e736acb3a1201c8c39ad963"} Jan 29 12:07:31 crc kubenswrapper[4840]: I0129 12:07:31.850612 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:07:36 crc kubenswrapper[4840]: I0129 12:07:36.754142 4840 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 29 12:07:39 crc kubenswrapper[4840]: I0129 12:07:39.394463 4840 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bg94b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 29 12:07:39 crc kubenswrapper[4840]: I0129 12:07:39.396318 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 29 12:07:39 crc kubenswrapper[4840]: I0129 12:07:39.963828 4840 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8pndg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= Jan 29 12:07:39 crc kubenswrapper[4840]: I0129 12:07:39.963928 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Jan 29 12:07:40 crc kubenswrapper[4840]: I0129 12:07:40.184610 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 
12:07:40 crc kubenswrapper[4840]: I0129 12:07:40.185255 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:40 crc kubenswrapper[4840]: I0129 12:07:40.186689 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-9qw27" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.830099 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 12:07:46 crc kubenswrapper[4840]: E0129 12:07:46.832035 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be2ab516-81c5-4625-a36f-8fc4770daf66" containerName="pruner" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.832189 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="be2ab516-81c5-4625-a36f-8fc4770daf66" containerName="pruner" Jan 29 12:07:46 crc kubenswrapper[4840]: E0129 12:07:46.832265 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c3f64db-9739-4507-9f5b-99c96ab60ce4" containerName="pruner" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.832326 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c3f64db-9739-4507-9f5b-99c96ab60ce4" containerName="pruner" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.832482 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c3f64db-9739-4507-9f5b-99c96ab60ce4" containerName="pruner" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.832581 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="be2ab516-81c5-4625-a36f-8fc4770daf66" containerName="pruner" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.833078 4840 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.837734 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.837977 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.848744 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.946406 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c50423ab-e2eb-4321-9089-c371791c36e5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:46 crc kubenswrapper[4840]: I0129 12:07:46.946695 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c50423ab-e2eb-4321-9089-c371791c36e5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:47 crc kubenswrapper[4840]: I0129 12:07:47.048734 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c50423ab-e2eb-4321-9089-c371791c36e5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:47 crc kubenswrapper[4840]: I0129 12:07:47.048861 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/c50423ab-e2eb-4321-9089-c371791c36e5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:47 crc kubenswrapper[4840]: I0129 12:07:47.048938 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c50423ab-e2eb-4321-9089-c371791c36e5-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:47 crc kubenswrapper[4840]: I0129 12:07:47.074459 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c50423ab-e2eb-4321-9089-c371791c36e5-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:47 crc kubenswrapper[4840]: I0129 12:07:47.160779 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.182096 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.182518 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.394102 4840 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bg94b container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.394208 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.632895 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.636982 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.672480 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b7b7775-4hcgr"] Jan 29 12:07:50 crc kubenswrapper[4840]: E0129 12:07:50.674517 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.674546 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" Jan 29 12:07:50 crc kubenswrapper[4840]: E0129 12:07:50.674560 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.674567 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.674716 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.674733 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" containerName="controller-manager" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.675160 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.682930 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b7b7775-4hcgr"] Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708672 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-config\") pod \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708719 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-proxy-ca-bundles\") pod \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708740 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b827e0-ff80-4021-9b29-72935f8fe30b-serving-cert\") pod \"03b827e0-ff80-4021-9b29-72935f8fe30b\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708807 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqdsf\" (UniqueName: \"kubernetes.io/projected/03b827e0-ff80-4021-9b29-72935f8fe30b-kube-api-access-hqdsf\") pod \"03b827e0-ff80-4021-9b29-72935f8fe30b\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708824 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-client-ca\") pod \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\" (UID: 
\"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708854 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-client-ca\") pod \"03b827e0-ff80-4021-9b29-72935f8fe30b\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708894 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-serving-cert\") pod \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708954 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-config\") pod \"03b827e0-ff80-4021-9b29-72935f8fe30b\" (UID: \"03b827e0-ff80-4021-9b29-72935f8fe30b\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.708991 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fb75\" (UniqueName: \"kubernetes.io/projected/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-kube-api-access-6fb75\") pod \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\" (UID: \"bc4198ee-d503-4b4d-81bf-f8ce73e9de18\") " Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.709144 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-proxy-ca-bundles\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.709170 4840 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-config\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.709201 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6q9x\" (UniqueName: \"kubernetes.io/projected/cfad4cd5-dce4-41ba-ae97-25eb36e42958-kube-api-access-r6q9x\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.709238 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-client-ca\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.709259 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfad4cd5-dce4-41ba-ae97-25eb36e42958-serving-cert\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.710434 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-client-ca" (OuterVolumeSpecName: "client-ca") pod "03b827e0-ff80-4021-9b29-72935f8fe30b" (UID: "03b827e0-ff80-4021-9b29-72935f8fe30b"). 
InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.710559 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-client-ca" (OuterVolumeSpecName: "client-ca") pod "bc4198ee-d503-4b4d-81bf-f8ce73e9de18" (UID: "bc4198ee-d503-4b4d-81bf-f8ce73e9de18"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.710580 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-config" (OuterVolumeSpecName: "config") pod "03b827e0-ff80-4021-9b29-72935f8fe30b" (UID: "03b827e0-ff80-4021-9b29-72935f8fe30b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.710650 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "bc4198ee-d503-4b4d-81bf-f8ce73e9de18" (UID: "bc4198ee-d503-4b4d-81bf-f8ce73e9de18"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.710745 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-config" (OuterVolumeSpecName: "config") pod "bc4198ee-d503-4b4d-81bf-f8ce73e9de18" (UID: "bc4198ee-d503-4b4d-81bf-f8ce73e9de18"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.729128 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b827e0-ff80-4021-9b29-72935f8fe30b-kube-api-access-hqdsf" (OuterVolumeSpecName: "kube-api-access-hqdsf") pod "03b827e0-ff80-4021-9b29-72935f8fe30b" (UID: "03b827e0-ff80-4021-9b29-72935f8fe30b"). InnerVolumeSpecName "kube-api-access-hqdsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.733027 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc4198ee-d503-4b4d-81bf-f8ce73e9de18" (UID: "bc4198ee-d503-4b4d-81bf-f8ce73e9de18"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.736364 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-kube-api-access-6fb75" (OuterVolumeSpecName: "kube-api-access-6fb75") pod "bc4198ee-d503-4b4d-81bf-f8ce73e9de18" (UID: "bc4198ee-d503-4b4d-81bf-f8ce73e9de18"). InnerVolumeSpecName "kube-api-access-6fb75". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.741329 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03b827e0-ff80-4021-9b29-72935f8fe30b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "03b827e0-ff80-4021-9b29-72935f8fe30b" (UID: "03b827e0-ff80-4021-9b29-72935f8fe30b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810253 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6q9x\" (UniqueName: \"kubernetes.io/projected/cfad4cd5-dce4-41ba-ae97-25eb36e42958-kube-api-access-r6q9x\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810476 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-client-ca\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810496 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfad4cd5-dce4-41ba-ae97-25eb36e42958-serving-cert\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810564 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-proxy-ca-bundles\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810593 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-config\") pod 
\"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810711 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810724 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810734 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fb75\" (UniqueName: \"kubernetes.io/projected/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-kube-api-access-6fb75\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810745 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810754 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/03b827e0-ff80-4021-9b29-72935f8fe30b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810834 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810844 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqdsf\" (UniqueName: \"kubernetes.io/projected/03b827e0-ff80-4021-9b29-72935f8fe30b-kube-api-access-hqdsf\") on node 
\"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810852 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bc4198ee-d503-4b4d-81bf-f8ce73e9de18-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.810860 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/03b827e0-ff80-4021-9b29-72935f8fe30b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.812201 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-client-ca\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.812290 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-config\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.813102 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-proxy-ca-bundles\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.815287 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfad4cd5-dce4-41ba-ae97-25eb36e42958-serving-cert\") 
pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.827067 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6q9x\" (UniqueName: \"kubernetes.io/projected/cfad4cd5-dce4-41ba-ae97-25eb36e42958-kube-api-access-r6q9x\") pod \"controller-manager-65b7b7775-4hcgr\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.962886 4840 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-8pndg container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 29 12:07:50 crc kubenswrapper[4840]: I0129 12:07:50.962988 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.005099 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.503530 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" event={"ID":"bc4198ee-d503-4b4d-81bf-f8ce73e9de18","Type":"ContainerDied","Data":"7fb7e1b949ee3b2720dadc6f8a93723fab80a6c9e82381e3352cd0e4bedfdb13"} Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.503598 4840 scope.go:117] "RemoveContainer" containerID="24d46f7df412f907f5e34ebd46430c8c30373de0baaaf6723cfcf8f1ab6be320" Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.503627 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bg94b" Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.505333 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" event={"ID":"03b827e0-ff80-4021-9b29-72935f8fe30b","Type":"ContainerDied","Data":"1896ca7532b53da1d380e91ccab8a032e9d949aca6fc91f5b9d14e266aaeb665"} Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.505525 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg" Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.532230 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg"] Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.541979 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-8pndg"] Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.547607 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bg94b"] Jan 29 12:07:51 crc kubenswrapper[4840]: I0129 12:07:51.552210 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bg94b"] Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.436515 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.437636 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.439962 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 12:07:52 crc kubenswrapper[4840]: E0129 12:07:52.479125 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 12:07:52 crc kubenswrapper[4840]: E0129 12:07:52.479461 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6rqz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,Stdi
nOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-r6w65_openshift-marketplace(12410648-0772-40f7-9261-107634802711): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:07:52 crc kubenswrapper[4840]: E0129 12:07:52.480874 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-r6w65" podUID="12410648-0772-40f7-9261-107634802711" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.539379 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.539452 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-var-lock\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.539491 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3262084-4755-4d63-b056-ea1a16cbd8e4-kube-api-access\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.641793 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.641861 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-var-lock\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.641892 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3262084-4755-4d63-b056-ea1a16cbd8e4-kube-api-access\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.641961 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-kubelet-dir\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.642014 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-var-lock\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.665843 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3262084-4755-4d63-b056-ea1a16cbd8e4-kube-api-access\") pod \"installer-9-crc\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.760237 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.959694 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb"] Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.961214 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.964539 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.965397 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.965551 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.965760 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.966069 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.966228 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"config" Jan 29 12:07:52 crc kubenswrapper[4840]: I0129 12:07:52.975727 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb"] Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.022806 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b827e0-ff80-4021-9b29-72935f8fe30b" path="/var/lib/kubelet/pods/03b827e0-ff80-4021-9b29-72935f8fe30b/volumes" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.024217 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4198ee-d503-4b4d-81bf-f8ce73e9de18" path="/var/lib/kubelet/pods/bc4198ee-d503-4b4d-81bf-f8ce73e9de18/volumes" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.047081 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-client-ca\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.047529 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0f8559-03e7-4b11-92ab-5063d5b32880-serving-cert\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.047821 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-config\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: 
\"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.048001 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hq5rd\" (UniqueName: \"kubernetes.io/projected/de0f8559-03e7-4b11-92ab-5063d5b32880-kube-api-access-hq5rd\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.149611 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-client-ca\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.149667 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0f8559-03e7-4b11-92ab-5063d5b32880-serving-cert\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.149701 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-config\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.150067 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-hq5rd\" (UniqueName: \"kubernetes.io/projected/de0f8559-03e7-4b11-92ab-5063d5b32880-kube-api-access-hq5rd\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.151289 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-client-ca\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.153215 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-config\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.164685 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0f8559-03e7-4b11-92ab-5063d5b32880-serving-cert\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.168932 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hq5rd\" (UniqueName: \"kubernetes.io/projected/de0f8559-03e7-4b11-92ab-5063d5b32880-kube-api-access-hq5rd\") pod \"route-controller-manager-588b4d9577-ckjgb\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " 
pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.315543 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.522377 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.522482 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.522580 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.524235 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:07:53 crc kubenswrapper[4840]: I0129 12:07:53.524320 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" 
containerID="cri-o://009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b" gracePeriod=600 Jan 29 12:07:54 crc kubenswrapper[4840]: E0129 12:07:54.229046 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-r6w65" podUID="12410648-0772-40f7-9261-107634802711" Jan 29 12:07:54 crc kubenswrapper[4840]: E0129 12:07:54.353733 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 12:07:54 crc kubenswrapper[4840]: E0129 12:07:54.354129 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdj8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-486ss_openshift-marketplace(3f137467-8040-45d5-bfa1-89860498eb85): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:07:54 crc kubenswrapper[4840]: E0129 12:07:54.355322 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-486ss" podUID="3f137467-8040-45d5-bfa1-89860498eb85" Jan 29 12:07:54 crc 
kubenswrapper[4840]: I0129 12:07:54.541208 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b" exitCode=0 Jan 29 12:07:54 crc kubenswrapper[4840]: I0129 12:07:54.541303 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b"} Jan 29 12:07:55 crc kubenswrapper[4840]: E0129 12:07:55.583092 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-486ss" podUID="3f137467-8040-45d5-bfa1-89860498eb85" Jan 29 12:07:55 crc kubenswrapper[4840]: E0129 12:07:55.671190 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 12:07:55 crc kubenswrapper[4840]: E0129 12:07:55.671376 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnck2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-sd7t2_openshift-marketplace(93c0bb8f-812d-4bf4-93c4-7ccd557d326d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:07:55 crc kubenswrapper[4840]: E0129 12:07:55.672617 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sd7t2" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" Jan 29 12:07:55 crc 
kubenswrapper[4840]: E0129 12:07:55.713204 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 29 12:07:55 crc kubenswrapper[4840]: E0129 12:07:55.714264 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnb5p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-c2k7s_openshift-marketplace(f8720503-5456-4934-a5a9-d58d6eeeb0a6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:07:55 crc kubenswrapper[4840]: E0129 12:07:55.716405 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-c2k7s" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" Jan 29 12:07:56 crc kubenswrapper[4840]: I0129 12:07:56.079262 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-thmqb"] Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.183088 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.184134 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.269726 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sd7t2" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.269773 4840 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-c2k7s" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.310480 4840 scope.go:117] "RemoveContainer" containerID="d3d61c2960c16fd52a16e18f9f0cad71db5161440ec99088d64c96feaf059fd0" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.562669 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.563799 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6jznq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pqht4_openshift-marketplace(a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.565068 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pqht4" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" Jan 29 12:08:00 crc 
kubenswrapper[4840]: I0129 12:08:00.581175 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-ksbxx" event={"ID":"df2082a0-52e3-4557-bdbf-9f5f654f00b4","Type":"ContainerStarted","Data":"80ac28feb872a62ff0e32d5253d81ef9b8cc1a011430f23c68ae431a9bfe900a"} Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.581567 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.582126 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.582172 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.593550 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-pqht4" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.695001 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.695169 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fvfk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jj2gn_openshift-marketplace(dd2c47ef-59b6-409e-8091-2e6f20eb89cb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.696892 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = 
Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jj2gn" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.754369 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb"] Jan 29 12:08:00 crc kubenswrapper[4840]: W0129 12:08:00.762723 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde0f8559_03e7_4b11_92ab_5063d5b32880.slice/crio-8253efe0d1e2530e2aa003822167c780ae42e754a044b725c6b5a9d6a0d21f24 WatchSource:0}: Error finding container 8253efe0d1e2530e2aa003822167c780ae42e754a044b725c6b5a9d6a0d21f24: Status 404 returned error can't find the container with id 8253efe0d1e2530e2aa003822167c780ae42e754a044b725c6b5a9d6a0d21f24 Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.804709 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.805212 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z6dpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tnxg7_openshift-marketplace(dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.806746 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tnxg7" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" Jan 29 12:08:00 crc 
kubenswrapper[4840]: I0129 12:08:00.832004 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.832305 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 29 12:08:00 crc kubenswrapper[4840]: I0129 12:08:00.851541 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b7b7775-4hcgr"] Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.916106 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.916408 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kndxl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vgg49_openshift-marketplace(fcb94932-ec4e-48f5-bda1-51aa89a53087): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 29 12:08:00 crc kubenswrapper[4840]: E0129 12:08:00.917608 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vgg49" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" Jan 29 12:08:01 crc 
kubenswrapper[4840]: I0129 12:08:01.589900 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" event={"ID":"cfad4cd5-dce4-41ba-ae97-25eb36e42958","Type":"ContainerStarted","Data":"f41e3c2722c70a05b540c8d037ae22eb6de72e242c30f4a37c2a149060a7a884"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.590511 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.590528 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" event={"ID":"cfad4cd5-dce4-41ba-ae97-25eb36e42958","Type":"ContainerStarted","Data":"b5ad49f8e3d9703e1e7d791e87d08115b3da563ddaef09a8f5b73c2d3ff79f48"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.593269 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" event={"ID":"de0f8559-03e7-4b11-92ab-5063d5b32880","Type":"ContainerStarted","Data":"a863a5eb3314013831b315f24fa3be90414ae3c7f180d853a60bf65a2b48a5b3"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.593303 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" event={"ID":"de0f8559-03e7-4b11-92ab-5063d5b32880","Type":"ContainerStarted","Data":"8253efe0d1e2530e2aa003822167c780ae42e754a044b725c6b5a9d6a0d21f24"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.594312 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.596233 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"f3262084-4755-4d63-b056-ea1a16cbd8e4","Type":"ContainerStarted","Data":"5d7396de3c7888e4a56fe6ca40e57ab8e9c7fe1bfafac4878e56f516b941c939"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.596261 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f3262084-4755-4d63-b056-ea1a16cbd8e4","Type":"ContainerStarted","Data":"4b9c3151e506792173ea3675dd4b7fd08097f0a9ff40e3cdd1cf89b4a26b9e80"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.599095 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"5ff863e72d8fdd89fd574ba3941e895f5b60a6ec9ccb97dd5b92e67bce246615"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.605674 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c50423ab-e2eb-4321-9089-c371791c36e5","Type":"ContainerStarted","Data":"8fcf8bb2b6451c92d652ea9684cd40a2cfb5d3c553e710002046d840d9be3a3e"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.605708 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c50423ab-e2eb-4321-9089-c371791c36e5","Type":"ContainerStarted","Data":"c616d82aafeecb63ed9bb43f4011f33be071e805935ccff2952cb0ba007fad77"} Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.607220 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.607256 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" 
containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.609351 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:08:01 crc kubenswrapper[4840]: E0129 12:08:01.611470 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vgg49" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" Jan 29 12:08:01 crc kubenswrapper[4840]: E0129 12:08:01.611511 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tnxg7" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" Jan 29 12:08:01 crc kubenswrapper[4840]: E0129 12:08:01.611572 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jj2gn" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.615833 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.630242 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" podStartSLOduration=15.630223319 
podStartE2EDuration="15.630223319s" podCreationTimestamp="2026-01-29 12:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:01.625584044 +0000 UTC m=+213.288563937" watchObservedRunningTime="2026-01-29 12:08:01.630223319 +0000 UTC m=+213.293203212" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.684866 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=9.68484625 podStartE2EDuration="9.68484625s" podCreationTimestamp="2026-01-29 12:07:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:01.679704138 +0000 UTC m=+213.342684041" watchObservedRunningTime="2026-01-29 12:08:01.68484625 +0000 UTC m=+213.347826143" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.740633 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=15.740610377 podStartE2EDuration="15.740610377s" podCreationTimestamp="2026-01-29 12:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:01.707502134 +0000 UTC m=+213.370482027" watchObservedRunningTime="2026-01-29 12:08:01.740610377 +0000 UTC m=+213.403590270" Jan 29 12:08:01 crc kubenswrapper[4840]: I0129 12:08:01.742227 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" podStartSLOduration=15.742216747 podStartE2EDuration="15.742216747s" podCreationTimestamp="2026-01-29 12:07:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:01.739371888 
+0000 UTC m=+213.402351791" watchObservedRunningTime="2026-01-29 12:08:01.742216747 +0000 UTC m=+213.405196640" Jan 29 12:08:03 crc kubenswrapper[4840]: I0129 12:08:03.620730 4840 generic.go:334] "Generic (PLEG): container finished" podID="c50423ab-e2eb-4321-9089-c371791c36e5" containerID="8fcf8bb2b6451c92d652ea9684cd40a2cfb5d3c553e710002046d840d9be3a3e" exitCode=0 Jan 29 12:08:03 crc kubenswrapper[4840]: I0129 12:08:03.622853 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c50423ab-e2eb-4321-9089-c371791c36e5","Type":"ContainerDied","Data":"8fcf8bb2b6451c92d652ea9684cd40a2cfb5d3c553e710002046d840d9be3a3e"} Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.005376 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.118037 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c50423ab-e2eb-4321-9089-c371791c36e5-kube-api-access\") pod \"c50423ab-e2eb-4321-9089-c371791c36e5\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.118339 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c50423ab-e2eb-4321-9089-c371791c36e5-kubelet-dir\") pod \"c50423ab-e2eb-4321-9089-c371791c36e5\" (UID: \"c50423ab-e2eb-4321-9089-c371791c36e5\") " Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.118505 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c50423ab-e2eb-4321-9089-c371791c36e5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c50423ab-e2eb-4321-9089-c371791c36e5" (UID: "c50423ab-e2eb-4321-9089-c371791c36e5"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.118795 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c50423ab-e2eb-4321-9089-c371791c36e5-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.125597 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50423ab-e2eb-4321-9089-c371791c36e5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c50423ab-e2eb-4321-9089-c371791c36e5" (UID: "c50423ab-e2eb-4321-9089-c371791c36e5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.220585 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c50423ab-e2eb-4321-9089-c371791c36e5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.635097 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"c50423ab-e2eb-4321-9089-c371791c36e5","Type":"ContainerDied","Data":"c616d82aafeecb63ed9bb43f4011f33be071e805935ccff2952cb0ba007fad77"} Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.635409 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c616d82aafeecb63ed9bb43f4011f33be071e805935ccff2952cb0ba007fad77" Jan 29 12:08:05 crc kubenswrapper[4840]: I0129 12:08:05.635150 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 29 12:08:09 crc kubenswrapper[4840]: E0129 12:08:09.126857 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 29 12:08:09 crc kubenswrapper[4840]: E0129 12:08:09.127262 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cdj8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:
FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-486ss_openshift-marketplace(3f137467-8040-45d5-bfa1-89860498eb85): ErrImagePull: initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 12:08:09 crc kubenswrapper[4840]: E0129 12:08:09.128561 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/community-operator-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/community-operators-486ss" podUID="3f137467-8040-45d5-bfa1-89860498eb85" Jan 29 12:08:10 crc kubenswrapper[4840]: I0129 12:08:10.182811 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:08:10 crc kubenswrapper[4840]: I0129 12:08:10.182902 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:08:10 crc kubenswrapper[4840]: I0129 12:08:10.183312 4840 patch_prober.go:28] interesting pod/downloads-7954f5f757-ksbxx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 29 12:08:10 crc kubenswrapper[4840]: I0129 12:08:10.183422 4840 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-ksbxx" podUID="df2082a0-52e3-4557-bdbf-9f5f654f00b4" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 29 12:08:20 crc kubenswrapper[4840]: I0129 12:08:20.192162 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-ksbxx" Jan 29 12:08:21 crc kubenswrapper[4840]: I0129 12:08:21.113468 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" podUID="b7da4209-9141-464e-a112-51470c4785d6" containerName="oauth-openshift" containerID="cri-o://4cad9999d0d5c88d0454dfb30ed55d43e9560adafdb59e2df034472e5f36935d" gracePeriod=15 Jan 29 12:08:22 crc kubenswrapper[4840]: I0129 12:08:22.737286 4840 generic.go:334] "Generic (PLEG): container finished" podID="b7da4209-9141-464e-a112-51470c4785d6" containerID="4cad9999d0d5c88d0454dfb30ed55d43e9560adafdb59e2df034472e5f36935d" exitCode=0 Jan 29 12:08:22 crc kubenswrapper[4840]: I0129 12:08:22.737375 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" event={"ID":"b7da4209-9141-464e-a112-51470c4785d6","Type":"ContainerDied","Data":"4cad9999d0d5c88d0454dfb30ed55d43e9560adafdb59e2df034472e5f36935d"} Jan 29 12:08:25 crc kubenswrapper[4840]: E0129 12:08:25.064187 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-486ss" podUID="3f137467-8040-45d5-bfa1-89860498eb85" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.559533 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.627388 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-b77dbf775-q975m"] Jan 29 12:08:25 crc kubenswrapper[4840]: E0129 12:08:25.627763 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50423ab-e2eb-4321-9089-c371791c36e5" containerName="pruner" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.627786 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50423ab-e2eb-4321-9089-c371791c36e5" containerName="pruner" Jan 29 12:08:25 crc kubenswrapper[4840]: E0129 12:08:25.627812 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7da4209-9141-464e-a112-51470c4785d6" containerName="oauth-openshift" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.627821 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7da4209-9141-464e-a112-51470c4785d6" containerName="oauth-openshift" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.627932 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50423ab-e2eb-4321-9089-c371791c36e5" containerName="pruner" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.627963 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7da4209-9141-464e-a112-51470c4785d6" containerName="oauth-openshift" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.628739 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.631074 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b77dbf775-q975m"] Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.715564 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-audit-policies\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.717239 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-idp-0-file-data\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718194 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbksv\" (UniqueName: \"kubernetes.io/projected/b7da4209-9141-464e-a112-51470c4785d6-kube-api-access-vbksv\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718274 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-trusted-ca-bundle\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718304 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-error\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718329 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-service-ca\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718386 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-serving-cert\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718434 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7da4209-9141-464e-a112-51470c4785d6-audit-dir\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718454 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-router-certs\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718502 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-cliconfig\") pod 
\"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718534 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-ocp-branding-template\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718574 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-login\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718633 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-provider-selection\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.718687 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-session\") pod \"b7da4209-9141-464e-a112-51470c4785d6\" (UID: \"b7da4209-9141-464e-a112-51470c4785d6\") " Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.717174 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: 
"b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.719479 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7da4209-9141-464e-a112-51470c4785d6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.719749 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.720309 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.720452 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.725654 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.726053 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.726073 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7da4209-9141-464e-a112-51470c4785d6-kube-api-access-vbksv" (OuterVolumeSpecName: "kube-api-access-vbksv") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "kube-api-access-vbksv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.726485 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.726529 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.726670 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.727224 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.727222 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.727498 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b7da4209-9141-464e-a112-51470c4785d6" (UID: "b7da4209-9141-464e-a112-51470c4785d6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.755234 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" event={"ID":"b7da4209-9141-464e-a112-51470c4785d6","Type":"ContainerDied","Data":"10d88972322b879da31e64de14d6e898d1936a52d205d57fb634d5d512956113"} Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.755320 4840 scope.go:117] "RemoveContainer" containerID="4cad9999d0d5c88d0454dfb30ed55d43e9560adafdb59e2df034472e5f36935d" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.755775 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-thmqb" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.806718 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-thmqb"] Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.811248 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-thmqb"] Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.820578 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821076 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-audit-policies\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821193 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821362 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821476 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1260554c-a53f-46f9-b367-a8848e1911dd-audit-dir\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821610 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821721 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-error\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821817 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b77dbf775-q975m\" 
(UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.821898 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-login\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822092 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822156 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2ckp\" (UniqueName: \"kubernetes.io/projected/1260554c-a53f-46f9-b367-a8848e1911dd-kube-api-access-d2ckp\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822265 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-session\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822299 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822324 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822373 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbksv\" (UniqueName: \"kubernetes.io/projected/b7da4209-9141-464e-a112-51470c4785d6-kube-api-access-vbksv\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822387 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822400 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822411 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822425 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822439 4840 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b7da4209-9141-464e-a112-51470c4785d6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822451 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822461 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822471 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822481 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822493 4840 reconciler_common.go:293] "Volume 
detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822503 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822512 4840 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b7da4209-9141-464e-a112-51470c4785d6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.822522 4840 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b7da4209-9141-464e-a112-51470c4785d6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.924072 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-session\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.924624 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 
12:08:25.924866 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925026 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925147 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-audit-policies\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925284 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925506 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925649 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1260554c-a53f-46f9-b367-a8848e1911dd-audit-dir\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925740 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1260554c-a53f-46f9-b367-a8848e1911dd-audit-dir\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925765 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925896 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-error\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925963 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.925990 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-login\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.926035 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.926062 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2ckp\" (UniqueName: \"kubernetes.io/projected/1260554c-a53f-46f9-b367-a8848e1911dd-kube-api-access-d2ckp\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.926276 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-audit-policies\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: 
\"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.926987 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-service-ca\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.928240 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.929734 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-router-certs\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.930070 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.930153 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.930251 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-session\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.931145 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-error\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.931553 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.931686 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") 
" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.931908 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-user-template-login\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:25 crc kubenswrapper[4840]: I0129 12:08:25.944557 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2ckp\" (UniqueName: \"kubernetes.io/projected/1260554c-a53f-46f9-b367-a8848e1911dd-kube-api-access-d2ckp\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:26 crc kubenswrapper[4840]: I0129 12:08:26.098638 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b7b7775-4hcgr"] Jan 29 12:08:26 crc kubenswrapper[4840]: I0129 12:08:26.098891 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" podUID="cfad4cd5-dce4-41ba-ae97-25eb36e42958" containerName="controller-manager" containerID="cri-o://f41e3c2722c70a05b540c8d037ae22eb6de72e242c30f4a37c2a149060a7a884" gracePeriod=30 Jan 29 12:08:26 crc kubenswrapper[4840]: I0129 12:08:26.221914 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb"] Jan 29 12:08:26 crc kubenswrapper[4840]: I0129 12:08:26.222139 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" podUID="de0f8559-03e7-4b11-92ab-5063d5b32880" 
containerName="route-controller-manager" containerID="cri-o://a863a5eb3314013831b315f24fa3be90414ae3c7f180d853a60bf65a2b48a5b3" gracePeriod=30 Jan 29 12:08:26 crc kubenswrapper[4840]: I0129 12:08:26.361762 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1260554c-a53f-46f9-b367-a8848e1911dd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b77dbf775-q975m\" (UID: \"1260554c-a53f-46f9-b367-a8848e1911dd\") " pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:26 crc kubenswrapper[4840]: I0129 12:08:26.552686 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:27 crc kubenswrapper[4840]: I0129 12:08:26.999742 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b77dbf775-q975m"] Jan 29 12:08:27 crc kubenswrapper[4840]: W0129 12:08:27.000240 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1260554c_a53f_46f9_b367_a8848e1911dd.slice/crio-4f10955cf2ed90f9af0e174f1f1143e5f501f4a1536a5719b3d004304d278849 WatchSource:0}: Error finding container 4f10955cf2ed90f9af0e174f1f1143e5f501f4a1536a5719b3d004304d278849: Status 404 returned error can't find the container with id 4f10955cf2ed90f9af0e174f1f1143e5f501f4a1536a5719b3d004304d278849 Jan 29 12:08:27 crc kubenswrapper[4840]: I0129 12:08:27.008905 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7da4209-9141-464e-a112-51470c4785d6" path="/var/lib/kubelet/pods/b7da4209-9141-464e-a112-51470c4785d6/volumes" Jan 29 12:08:27 crc kubenswrapper[4840]: I0129 12:08:27.782441 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" 
event={"ID":"1260554c-a53f-46f9-b367-a8848e1911dd","Type":"ContainerStarted","Data":"4f10955cf2ed90f9af0e174f1f1143e5f501f4a1536a5719b3d004304d278849"} Jan 29 12:08:27 crc kubenswrapper[4840]: I0129 12:08:27.784762 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd7t2" event={"ID":"93c0bb8f-812d-4bf4-93c4-7ccd557d326d","Type":"ContainerStarted","Data":"5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab"} Jan 29 12:08:27 crc kubenswrapper[4840]: I0129 12:08:27.787172 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2k7s" event={"ID":"f8720503-5456-4934-a5a9-d58d6eeeb0a6","Type":"ContainerStarted","Data":"5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134"} Jan 29 12:08:27 crc kubenswrapper[4840]: I0129 12:08:27.790518 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6w65" event={"ID":"12410648-0772-40f7-9261-107634802711","Type":"ContainerStarted","Data":"273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3"} Jan 29 12:08:28 crc kubenswrapper[4840]: I0129 12:08:28.798249 4840 generic.go:334] "Generic (PLEG): container finished" podID="de0f8559-03e7-4b11-92ab-5063d5b32880" containerID="a863a5eb3314013831b315f24fa3be90414ae3c7f180d853a60bf65a2b48a5b3" exitCode=0 Jan 29 12:08:28 crc kubenswrapper[4840]: I0129 12:08:28.798353 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" event={"ID":"de0f8559-03e7-4b11-92ab-5063d5b32880","Type":"ContainerDied","Data":"a863a5eb3314013831b315f24fa3be90414ae3c7f180d853a60bf65a2b48a5b3"} Jan 29 12:08:28 crc kubenswrapper[4840]: I0129 12:08:28.800477 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" 
event={"ID":"1260554c-a53f-46f9-b367-a8848e1911dd","Type":"ContainerStarted","Data":"bc92b1bd9987db90ba6756ac66e18ea7ffba90b409d6a527d2acb9fe0e5a41ad"} Jan 29 12:08:28 crc kubenswrapper[4840]: I0129 12:08:28.802646 4840 generic.go:334] "Generic (PLEG): container finished" podID="cfad4cd5-dce4-41ba-ae97-25eb36e42958" containerID="f41e3c2722c70a05b540c8d037ae22eb6de72e242c30f4a37c2a149060a7a884" exitCode=0 Jan 29 12:08:28 crc kubenswrapper[4840]: I0129 12:08:28.802677 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" event={"ID":"cfad4cd5-dce4-41ba-ae97-25eb36e42958","Type":"ContainerDied","Data":"f41e3c2722c70a05b540c8d037ae22eb6de72e242c30f4a37c2a149060a7a884"} Jan 29 12:08:29 crc kubenswrapper[4840]: I0129 12:08:29.810967 4840 generic.go:334] "Generic (PLEG): container finished" podID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerID="5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134" exitCode=0 Jan 29 12:08:29 crc kubenswrapper[4840]: I0129 12:08:29.811281 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2k7s" event={"ID":"f8720503-5456-4934-a5a9-d58d6eeeb0a6","Type":"ContainerDied","Data":"5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134"} Jan 29 12:08:29 crc kubenswrapper[4840]: I0129 12:08:29.812796 4840 generic.go:334] "Generic (PLEG): container finished" podID="12410648-0772-40f7-9261-107634802711" containerID="273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3" exitCode=0 Jan 29 12:08:29 crc kubenswrapper[4840]: I0129 12:08:29.812859 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6w65" event={"ID":"12410648-0772-40f7-9261-107634802711","Type":"ContainerDied","Data":"273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3"} Jan 29 12:08:29 crc kubenswrapper[4840]: I0129 12:08:29.814328 4840 generic.go:334] "Generic (PLEG): 
container finished" podID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerID="5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab" exitCode=0 Jan 29 12:08:29 crc kubenswrapper[4840]: I0129 12:08:29.814362 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd7t2" event={"ID":"93c0bb8f-812d-4bf4-93c4-7ccd557d326d","Type":"ContainerDied","Data":"5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab"} Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.006924 4840 patch_prober.go:28] interesting pod/controller-manager-65b7b7775-4hcgr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body= Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.007056 4840 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" podUID="cfad4cd5-dce4-41ba-ae97-25eb36e42958" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.719568 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.757052 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k"] Jan 29 12:08:31 crc kubenswrapper[4840]: E0129 12:08:31.757350 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfad4cd5-dce4-41ba-ae97-25eb36e42958" containerName="controller-manager" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.757368 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfad4cd5-dce4-41ba-ae97-25eb36e42958" containerName="controller-manager" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.757481 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfad4cd5-dce4-41ba-ae97-25eb36e42958" containerName="controller-manager" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.758058 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.781010 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k"] Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.826590 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" event={"ID":"cfad4cd5-dce4-41ba-ae97-25eb36e42958","Type":"ContainerDied","Data":"b5ad49f8e3d9703e1e7d791e87d08115b3da563ddaef09a8f5b73c2d3ff79f48"} Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.826655 4840 scope.go:117] "RemoveContainer" containerID="f41e3c2722c70a05b540c8d037ae22eb6de72e242c30f4a37c2a149060a7a884" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.826703 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b7b7775-4hcgr" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.878793 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" podStartSLOduration=35.878763145 podStartE2EDuration="35.878763145s" podCreationTimestamp="2026-01-29 12:07:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:31.875437239 +0000 UTC m=+243.538417152" watchObservedRunningTime="2026-01-29 12:08:31.878763145 +0000 UTC m=+243.541743038" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913150 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-proxy-ca-bundles\") pod \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913241 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-config\") pod \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913283 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfad4cd5-dce4-41ba-ae97-25eb36e42958-serving-cert\") pod \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913420 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-client-ca\") pod 
\"cfad4cd5-dce4-41ba-ae97-25eb36e42958\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913480 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6q9x\" (UniqueName: \"kubernetes.io/projected/cfad4cd5-dce4-41ba-ae97-25eb36e42958-kube-api-access-r6q9x\") pod \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\" (UID: \"cfad4cd5-dce4-41ba-ae97-25eb36e42958\") " Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913678 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp4p9\" (UniqueName: \"kubernetes.io/projected/0e647bfe-fa82-470d-9e53-4a0046d446d7-kube-api-access-vp4p9\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913727 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-proxy-ca-bundles\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913755 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-config\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913780 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0e647bfe-fa82-470d-9e53-4a0046d446d7-serving-cert\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.913823 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-client-ca\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.915275 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-client-ca" (OuterVolumeSpecName: "client-ca") pod "cfad4cd5-dce4-41ba-ae97-25eb36e42958" (UID: "cfad4cd5-dce4-41ba-ae97-25eb36e42958"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.915371 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cfad4cd5-dce4-41ba-ae97-25eb36e42958" (UID: "cfad4cd5-dce4-41ba-ae97-25eb36e42958"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.915413 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-config" (OuterVolumeSpecName: "config") pod "cfad4cd5-dce4-41ba-ae97-25eb36e42958" (UID: "cfad4cd5-dce4-41ba-ae97-25eb36e42958"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.924501 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cfad4cd5-dce4-41ba-ae97-25eb36e42958-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cfad4cd5-dce4-41ba-ae97-25eb36e42958" (UID: "cfad4cd5-dce4-41ba-ae97-25eb36e42958"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:31 crc kubenswrapper[4840]: I0129 12:08:31.924542 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfad4cd5-dce4-41ba-ae97-25eb36e42958-kube-api-access-r6q9x" (OuterVolumeSpecName: "kube-api-access-r6q9x") pod "cfad4cd5-dce4-41ba-ae97-25eb36e42958" (UID: "cfad4cd5-dce4-41ba-ae97-25eb36e42958"). InnerVolumeSpecName "kube-api-access-r6q9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018085 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-config\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018307 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e647bfe-fa82-470d-9e53-4a0046d446d7-serving-cert\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018347 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-client-ca\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018470 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp4p9\" (UniqueName: \"kubernetes.io/projected/0e647bfe-fa82-470d-9e53-4a0046d446d7-kube-api-access-vp4p9\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018498 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-proxy-ca-bundles\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018536 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018548 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6q9x\" (UniqueName: \"kubernetes.io/projected/cfad4cd5-dce4-41ba-ae97-25eb36e42958-kube-api-access-r6q9x\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018559 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018569 4840 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cfad4cd5-dce4-41ba-ae97-25eb36e42958-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.018579 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cfad4cd5-dce4-41ba-ae97-25eb36e42958-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.020333 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-proxy-ca-bundles\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.021397 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-client-ca\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.023204 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-config\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.027214 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e647bfe-fa82-470d-9e53-4a0046d446d7-serving-cert\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " 
pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.042716 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp4p9\" (UniqueName: \"kubernetes.io/projected/0e647bfe-fa82-470d-9e53-4a0046d446d7-kube-api-access-vp4p9\") pod \"controller-manager-8bb9bcff4-vzt7k\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.087933 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.170574 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b7b7775-4hcgr"] Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.175798 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b7b7775-4hcgr"] Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.258625 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.323650 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0f8559-03e7-4b11-92ab-5063d5b32880-serving-cert\") pod \"de0f8559-03e7-4b11-92ab-5063d5b32880\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.325392 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-config" (OuterVolumeSpecName: "config") pod "de0f8559-03e7-4b11-92ab-5063d5b32880" (UID: "de0f8559-03e7-4b11-92ab-5063d5b32880"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.324189 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-config\") pod \"de0f8559-03e7-4b11-92ab-5063d5b32880\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.326319 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-client-ca\") pod \"de0f8559-03e7-4b11-92ab-5063d5b32880\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.326430 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hq5rd\" (UniqueName: \"kubernetes.io/projected/de0f8559-03e7-4b11-92ab-5063d5b32880-kube-api-access-hq5rd\") pod \"de0f8559-03e7-4b11-92ab-5063d5b32880\" (UID: \"de0f8559-03e7-4b11-92ab-5063d5b32880\") " Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.326678 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-client-ca" (OuterVolumeSpecName: "client-ca") pod "de0f8559-03e7-4b11-92ab-5063d5b32880" (UID: "de0f8559-03e7-4b11-92ab-5063d5b32880"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.329213 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de0f8559-03e7-4b11-92ab-5063d5b32880-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "de0f8559-03e7-4b11-92ab-5063d5b32880" (UID: "de0f8559-03e7-4b11-92ab-5063d5b32880"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.330473 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de0f8559-03e7-4b11-92ab-5063d5b32880-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.330514 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.330527 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/de0f8559-03e7-4b11-92ab-5063d5b32880-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.331892 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de0f8559-03e7-4b11-92ab-5063d5b32880-kube-api-access-hq5rd" (OuterVolumeSpecName: "kube-api-access-hq5rd") pod "de0f8559-03e7-4b11-92ab-5063d5b32880" (UID: "de0f8559-03e7-4b11-92ab-5063d5b32880"). InnerVolumeSpecName "kube-api-access-hq5rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.335746 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k"] Jan 29 12:08:32 crc kubenswrapper[4840]: W0129 12:08:32.341558 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e647bfe_fa82_470d_9e53_4a0046d446d7.slice/crio-2d179248275d53df2b35a4c7b59b6ca466bb5aae139be2fd1d51c65aec584c3f WatchSource:0}: Error finding container 2d179248275d53df2b35a4c7b59b6ca466bb5aae139be2fd1d51c65aec584c3f: Status 404 returned error can't find the container with id 2d179248275d53df2b35a4c7b59b6ca466bb5aae139be2fd1d51c65aec584c3f Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.432100 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hq5rd\" (UniqueName: \"kubernetes.io/projected/de0f8559-03e7-4b11-92ab-5063d5b32880-kube-api-access-hq5rd\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.837514 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.838479 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb" event={"ID":"de0f8559-03e7-4b11-92ab-5063d5b32880","Type":"ContainerDied","Data":"8253efe0d1e2530e2aa003822167c780ae42e754a044b725c6b5a9d6a0d21f24"} Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.838579 4840 scope.go:117] "RemoveContainer" containerID="a863a5eb3314013831b315f24fa3be90414ae3c7f180d853a60bf65a2b48a5b3" Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.847224 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" event={"ID":"0e647bfe-fa82-470d-9e53-4a0046d446d7","Type":"ContainerStarted","Data":"2d179248275d53df2b35a4c7b59b6ca466bb5aae139be2fd1d51c65aec584c3f"} Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.874245 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb"] Jan 29 12:08:32 crc kubenswrapper[4840]: I0129 12:08:32.877680 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-588b4d9577-ckjgb"] Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.017659 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfad4cd5-dce4-41ba-ae97-25eb36e42958" path="/var/lib/kubelet/pods/cfad4cd5-dce4-41ba-ae97-25eb36e42958/volumes" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.018737 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de0f8559-03e7-4b11-92ab-5063d5b32880" path="/var/lib/kubelet/pods/de0f8559-03e7-4b11-92ab-5063d5b32880/volumes" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.977993 4840 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc"] Jan 29 12:08:33 crc kubenswrapper[4840]: E0129 12:08:33.978535 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de0f8559-03e7-4b11-92ab-5063d5b32880" containerName="route-controller-manager" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.978552 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="de0f8559-03e7-4b11-92ab-5063d5b32880" containerName="route-controller-manager" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.978655 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="de0f8559-03e7-4b11-92ab-5063d5b32880" containerName="route-controller-manager" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.979453 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.982738 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.982822 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.983004 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.983038 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.983117 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.983142 4840 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:08:33 crc kubenswrapper[4840]: I0129 12:08:33.991343 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc"] Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.069559 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-client-ca\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.069748 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-config\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.069775 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr9xd\" (UniqueName: \"kubernetes.io/projected/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-kube-api-access-dr9xd\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.069803 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-serving-cert\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: 
\"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.171541 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr9xd\" (UniqueName: \"kubernetes.io/projected/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-kube-api-access-dr9xd\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.171608 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-config\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.171642 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-serving-cert\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.171696 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-client-ca\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.173412 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-config\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.174022 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-client-ca\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.181820 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-serving-cert\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.197607 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr9xd\" (UniqueName: \"kubernetes.io/projected/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-kube-api-access-dr9xd\") pod \"route-controller-manager-7bd79cd598-rkwxc\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.297850 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.691393 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc"] Jan 29 12:08:34 crc kubenswrapper[4840]: I0129 12:08:34.861332 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" event={"ID":"4e1d0553-e82a-451d-b551-5a6a5c44f6a1","Type":"ContainerStarted","Data":"b3ead8045a80b61cbe6d0e59dac2937678cceb98833f4024e2808b053a0cec9e"} Jan 29 12:08:35 crc kubenswrapper[4840]: I0129 12:08:35.867054 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" event={"ID":"0e647bfe-fa82-470d-9e53-4a0046d446d7","Type":"ContainerStarted","Data":"f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a"} Jan 29 12:08:36 crc kubenswrapper[4840]: I0129 12:08:36.553387 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:36 crc kubenswrapper[4840]: I0129 12:08:36.561436 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-b77dbf775-q975m" Jan 29 12:08:36 crc kubenswrapper[4840]: I0129 12:08:36.874643 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" event={"ID":"4e1d0553-e82a-451d-b551-5a6a5c44f6a1","Type":"ContainerStarted","Data":"a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e"} Jan 29 12:08:36 crc kubenswrapper[4840]: I0129 12:08:36.875289 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:36 crc kubenswrapper[4840]: I0129 
12:08:36.882112 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:08:36 crc kubenswrapper[4840]: I0129 12:08:36.896453 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" podStartSLOduration=10.896428024 podStartE2EDuration="10.896428024s" podCreationTimestamp="2026-01-29 12:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:36.895497008 +0000 UTC m=+248.558476911" watchObservedRunningTime="2026-01-29 12:08:36.896428024 +0000 UTC m=+248.559407917" Jan 29 12:08:37 crc kubenswrapper[4840]: I0129 12:08:37.880862 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:37 crc kubenswrapper[4840]: I0129 12:08:37.886900 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:08:37 crc kubenswrapper[4840]: I0129 12:08:37.902398 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" podStartSLOduration=11.902378291 podStartE2EDuration="11.902378291s" podCreationTimestamp="2026-01-29 12:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:08:37.899217011 +0000 UTC m=+249.562196914" watchObservedRunningTime="2026-01-29 12:08:37.902378291 +0000 UTC m=+249.565358184" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.993222 4840 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:08:38 crc 
kubenswrapper[4840]: I0129 12:08:38.993864 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d" gracePeriod=15 Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.993974 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478" gracePeriod=15 Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.993998 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd" gracePeriod=15 Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.993920 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd" gracePeriod=15 Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.994065 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee" gracePeriod=15 Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998418 4840 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 29 12:08:38 crc kubenswrapper[4840]: E0129 12:08:38.998682 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998699 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 12:08:38 crc kubenswrapper[4840]: E0129 12:08:38.998713 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998723 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:08:38 crc kubenswrapper[4840]: E0129 12:08:38.998737 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998743 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 29 12:08:38 crc kubenswrapper[4840]: E0129 12:08:38.998751 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998758 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 12:08:38 crc kubenswrapper[4840]: E0129 12:08:38.998771 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998778 4840 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 12:08:38 crc kubenswrapper[4840]: E0129 12:08:38.998786 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998803 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998906 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998920 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998928 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998937 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.998963 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 29 12:08:38 crc kubenswrapper[4840]: E0129 12:08:38.999079 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.999086 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 29 12:08:38 crc kubenswrapper[4840]: I0129 12:08:38.999195 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.000464 4840 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.001091 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.006842 4840 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.059342 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.152821 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.152911 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.152970 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.153017 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.153042 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.153068 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.153090 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc 
kubenswrapper[4840]: I0129 12:08:39.153248 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.255757 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.255871 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.255909 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.255901 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.255958 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.255980 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.255991 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256015 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256034 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 
12:08:39.256074 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256164 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256235 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256250 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256273 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256310 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.256341 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.355152 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:08:39 crc kubenswrapper[4840]: E0129 12:08:39.581332 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:39Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f11878
9f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:39 crc kubenswrapper[4840]: E0129 12:08:39.582262 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:39 crc kubenswrapper[4840]: E0129 12:08:39.582773 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:39 crc kubenswrapper[4840]: E0129 12:08:39.583202 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:39 crc kubenswrapper[4840]: E0129 12:08:39.583715 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:39 crc kubenswrapper[4840]: E0129 12:08:39.583756 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.894328 4840 generic.go:334] "Generic (PLEG): container finished" podID="f3262084-4755-4d63-b056-ea1a16cbd8e4" containerID="5d7396de3c7888e4a56fe6ca40e57ab8e9c7fe1bfafac4878e56f516b941c939" exitCode=0
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.894405 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f3262084-4755-4d63-b056-ea1a16cbd8e4","Type":"ContainerDied","Data":"5d7396de3c7888e4a56fe6ca40e57ab8e9c7fe1bfafac4878e56f516b941c939"}
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.895134 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.895298 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.898062 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.900095 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.901131 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd" exitCode=0
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.901178 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee" exitCode=0
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.901194 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478" exitCode=0
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.901204 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd" exitCode=2
Jan 29 12:08:39 crc kubenswrapper[4840]: I0129 12:08:39.901215 4840 scope.go:117] "RemoveContainer" containerID="61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447"
Jan 29 12:08:41 crc kubenswrapper[4840]: E0129 12:08:41.348168 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 12:08:41 crc kubenswrapper[4840]: I0129 12:08:41.920553 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 29 12:08:41 crc kubenswrapper[4840]: I0129 12:08:41.922466 4840 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d" exitCode=0
Jan 29 12:08:49 crc kubenswrapper[4840]: I0129 12:08:49.004551 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: I0129 12:08:49.005335 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.259225 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.259768 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.260072 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.260405 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.260861 4840 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: I0129 12:08:49.260933 4840 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.261523 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="200ms"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.462365 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="400ms"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.826683 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:08:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f11878
9f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.827476 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.827783 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.828060 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.828362 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.828391 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Jan 29 12:08:49 crc kubenswrapper[4840]: E0129 12:08:49.863616 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="800ms"
Jan 29 12:08:50 crc kubenswrapper[4840]: E0129 12:08:50.664937 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="1.6s"
Jan 29 12:08:51 crc kubenswrapper[4840]: E0129 12:08:51.654091 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-486ss.188f324069359947\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-486ss.188f324069359947 openshift-marketplace 29570 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-486ss,UID:3f137467-8040-45d5-bfa1-89860498eb85,APIVersion:v1,ResourceVersion:28497,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:07:10 +0000 UTC,LastTimestamp:2026-01-29 12:08:51.65325733 +0000 UTC m=+263.316237233,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 12:08:52 crc kubenswrapper[4840]: E0129 12:08:52.266358 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="3.2s"
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.703468 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.704024 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.704319 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.861270 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3262084-4755-4d63-b056-ea1a16cbd8e4-kube-api-access\") pod \"f3262084-4755-4d63-b056-ea1a16cbd8e4\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") "
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.861372 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-kubelet-dir\") pod \"f3262084-4755-4d63-b056-ea1a16cbd8e4\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") "
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.861424 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-var-lock\") pod \"f3262084-4755-4d63-b056-ea1a16cbd8e4\" (UID: \"f3262084-4755-4d63-b056-ea1a16cbd8e4\") "
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.861583 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f3262084-4755-4d63-b056-ea1a16cbd8e4" (UID: "f3262084-4755-4d63-b056-ea1a16cbd8e4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.861672 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-var-lock" (OuterVolumeSpecName: "var-lock") pod "f3262084-4755-4d63-b056-ea1a16cbd8e4" (UID: "f3262084-4755-4d63-b056-ea1a16cbd8e4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.861871 4840 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.861893 4840 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f3262084-4755-4d63-b056-ea1a16cbd8e4-var-lock\") on node \"crc\" DevicePath \"\""
Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.866901 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3262084-4755-4d63-b056-ea1a16cbd8e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f3262084-4755-4d63-b056-ea1a16cbd8e4" (UID: "f3262084-4755-4d63-b056-ea1a16cbd8e4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.963592 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3262084-4755-4d63-b056-ea1a16cbd8e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.985409 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"f3262084-4755-4d63-b056-ea1a16cbd8e4","Type":"ContainerDied","Data":"4b9c3151e506792173ea3675dd4b7fd08097f0a9ff40e3cdd1cf89b4a26b9e80"} Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.985461 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b9c3151e506792173ea3675dd4b7fd08097f0a9ff40e3cdd1cf89b4a26b9e80" Jan 29 12:08:52 crc kubenswrapper[4840]: I0129 12:08:52.985496 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.008525 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.009395 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.995720 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.995777 4840 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba" exitCode=1 Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.995808 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba"} Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.996399 4840 scope.go:117] "RemoveContainer" containerID="6ae75e3b5463825f2dd48d9f7f6539b69a10793f0038541e631b0a05daf59aba" Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.996857 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.997689 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:53 crc kubenswrapper[4840]: I0129 12:08:53.998110 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:54 crc kubenswrapper[4840]: I0129 12:08:54.013892 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:08:54 crc kubenswrapper[4840]: I0129 12:08:54.148079 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:08:54 crc kubenswrapper[4840]: I0129 12:08:54.714639 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 29 12:08:55 crc kubenswrapper[4840]: E0129 12:08:55.468340 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="6.4s" Jan 29 12:08:55 crc kubenswrapper[4840]: E0129 12:08:55.476644 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447\": container with ID starting with 61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447 not found: ID does not exist" containerID="61d45d1a65068ea381bf4b3317850ff6ccaa35e9bd1e9b83da1ef509c98ad447" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.478411 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.479305 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.480011 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.480450 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.480757 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.481095 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:55 crc kubenswrapper[4840]: W0129 12:08:55.541669 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-69e0e5dac83767f8c432a00d2db49420e1116f45c2ead2f29caff850e0bd2935 
WatchSource:0}: Error finding container 69e0e5dac83767f8c432a00d2db49420e1116f45c2ead2f29caff850e0bd2935: Status 404 returned error can't find the container with id 69e0e5dac83767f8c432a00d2db49420e1116f45c2ead2f29caff850e0bd2935 Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.597401 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.597561 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.597624 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.597667 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.597721 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.597739 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.598056 4840 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.598086 4840 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:55 crc kubenswrapper[4840]: I0129 12:08:55.598095 4840 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.010341 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6w65" event={"ID":"12410648-0772-40f7-9261-107634802711","Type":"ContainerStarted","Data":"8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.011484 4840 status_manager.go:851] 
"Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.011938 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.012219 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.012400 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqht4" event={"ID":"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e","Type":"ContainerStarted","Data":"2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.012486 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.012796 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.013153 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.013472 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.013898 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.014318 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.014446 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerStarted","Data":"a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.014563 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.014801 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.015117 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.015380 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.015642 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.015907 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.016225 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.016587 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.017012 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.017181 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd7t2" 
event={"ID":"93c0bb8f-812d-4bf4-93c4-7ccd557d326d","Type":"ContainerStarted","Data":"5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.017675 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.018099 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.018415 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.018704 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.019006 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.019251 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.019460 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.019753 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.021563 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.023074 4840 scope.go:117] "RemoveContainer" containerID="df52db5c907f9b029c8587d505fa21086854e1faeebe884f9c777baa3ea74cdd" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.023109 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.031103 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.031402 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ca73efb7a4b7724a7b0d9fe7757adff4e4d49e89d98c810abb3dbd37eb4af312"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.032105 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.032679 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.034132 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.034650 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.035073 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.035483 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.035753 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.036061 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.040037 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.040560 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.041170 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.041417 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.041646 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.041855 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.042122 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.042339 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.043629 4840 generic.go:334] "Generic (PLEG): container finished" podID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerID="d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242" exitCode=0 Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.043718 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgg49" event={"ID":"fcb94932-ec4e-48f5-bda1-51aa89a53087","Type":"ContainerDied","Data":"d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.044404 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.045210 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.045554 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.046747 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.047215 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.047501 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.047675 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.047712 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.047726 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"69e0e5dac83767f8c432a00d2db49420e1116f45c2ead2f29caff850e0bd2935"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.048156 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.048620 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: 
connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.049021 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.049486 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.049995 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.050411 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.050699 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection 
refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.050923 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.051226 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.051576 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.052077 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.052675 4840 scope.go:117] "RemoveContainer" containerID="3eef449beee96f78a9b32ce3f0a870055327a351c73b05d5b5831e3712ca6aee" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.054095 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxg7" 
event={"ID":"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4","Type":"ContainerStarted","Data":"cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.054829 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.055087 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.055399 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.055725 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.055979 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.056160 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.056347 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.056554 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.056830 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.059321 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.062529 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2k7s" event={"ID":"f8720503-5456-4934-a5a9-d58d6eeeb0a6","Type":"ContainerStarted","Data":"e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59"} Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.063335 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.066053 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.066456 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.067378 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486ss" event={"ID":"3f137467-8040-45d5-bfa1-89860498eb85","Type":"ContainerStarted","Data":"f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809"} 
Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.067538 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.067769 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.067962 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.068156 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.068474 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" 
Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.068774 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.068966 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.069151 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.069410 4840 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.069563 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc 
kubenswrapper[4840]: I0129 12:08:56.069705 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.069887 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.070261 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.076834 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.077140 4840 scope.go:117] "RemoveContainer" containerID="e3c2fdd19cf145580ecb570032233ba22bdee371954a2442368ba73db22c8478" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.077371 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.077714 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.078086 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.079241 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.080367 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.080709 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.094850 4840 scope.go:117] "RemoveContainer" containerID="af16e33b89d0a61563a84a985a1ce8dd838cd0b240f0e2ddfc8a372c4fa476cd" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.107883 4840 scope.go:117] "RemoveContainer" containerID="505d96959d47fcdac3763adf0474639ffd5f221fcf6587b69e4280b4c83fa61d" Jan 29 12:08:56 crc kubenswrapper[4840]: I0129 12:08:56.125965 4840 scope.go:117] "RemoveContainer" containerID="9212df66f3db4829a9993afd08eb20c9157da14bbf546768c6546b650f120a92" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.040418 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.077238 4840 generic.go:334] "Generic (PLEG): container finished" podID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerID="cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646" exitCode=0 Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.077322 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxg7" event={"ID":"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4","Type":"ContainerDied","Data":"cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646"} Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.078131 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.078409 4840 
status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.078655 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.078870 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.079104 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.079607 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.079686 4840 generic.go:334] 
"Generic (PLEG): container finished" podID="3f137467-8040-45d5-bfa1-89860498eb85" containerID="f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809" exitCode=0 Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.079753 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486ss" event={"ID":"3f137467-8040-45d5-bfa1-89860498eb85","Type":"ContainerDied","Data":"f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809"} Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.081517 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.083359 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.083735 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.084085 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.084276 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.084552 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.084698 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.084909 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.085166 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.085415 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.085620 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.085819 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.086118 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.086395 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.086617 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.086798 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.090994 4840 generic.go:334] "Generic (PLEG): container finished" podID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerID="2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790" exitCode=0
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.091075 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqht4" event={"ID":"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e","Type":"ContainerDied","Data":"2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790"}
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.091746 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.092195 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.092908 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.093189 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.093475 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.093721 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.094086 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.094281 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.094442 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.094601 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.094768 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.100394 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgg49" event={"ID":"fcb94932-ec4e-48f5-bda1-51aa89a53087","Type":"ContainerStarted","Data":"47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288"}
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.101537 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.101898 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.103275 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.104065 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.104477 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.104797 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.105107 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.105336 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.105548 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.105763 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.106094 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.111779 4840 generic.go:334] "Generic (PLEG): container finished" podID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerID="a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494" exitCode=0
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.112657 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerDied","Data":"a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494"}
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.113679 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.114386 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.114848 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.115213 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.115487 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.115714 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.115921 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.116179 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.116501 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.116965 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: I0129 12:08:57.117329 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:57 crc kubenswrapper[4840]: E0129 12:08:57.295362 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-486ss.188f324069359947\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-486ss.188f324069359947 openshift-marketplace 29570 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-486ss,UID:3f137467-8040-45d5-bfa1-89860498eb85,APIVersion:v1,ResourceVersion:28497,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:07:10 +0000 UTC,LastTimestamp:2026-01-29 12:08:51.65325733 +0000 UTC m=+263.316237233,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.118422 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486ss" event={"ID":"3f137467-8040-45d5-bfa1-89860498eb85","Type":"ContainerStarted","Data":"a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8"}
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.119814 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.120014 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.120194 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.120505 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.120918 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.121149 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.121410 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.121704 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.122007 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.122081 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqht4" event={"ID":"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e","Type":"ContainerStarted","Data":"8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e"}
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.122220 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.122420 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.122786 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.123001 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.123160 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.123323 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.123562 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.123865 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.124077 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.124302 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.124610 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.124871 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.125090 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.125285 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerStarted","Data":"64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907"}
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.125863 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.126142 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.126417 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.126615 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.126775 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.126928 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.127174 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.127393 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.127646 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.127934 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.128183 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.128725 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxg7" event={"ID":"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4","Type":"ContainerStarted","Data":"9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273"}
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.129542 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.129725 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.130059 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.130437 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.130663 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.130822 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.131004 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.131309 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.131697 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.131994 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:58 crc kubenswrapper[4840]: I0129 12:08:58.132257 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.003719 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.004116 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.004476 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.004741 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.004996 4840 status_manager.go:851] "Failed to get status for pod"
podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.005255 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.005636 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.005901 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.006143 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.006309 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.006459 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.615108 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-486ss" Jan 29 12:08:59 crc kubenswrapper[4840]: I0129 12:08:59.615435 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-486ss" Jan 29 12:09:00 crc kubenswrapper[4840]: E0129 12:09:00.011536 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:00Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:00Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:00Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:00Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f11878
9f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\
\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: E0129 12:09:00.012440 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: E0129 12:09:00.012711 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: E0129 12:09:00.012998 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: E0129 12:09:00.013209 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: E0129 12:09:00.013237 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.096554 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-c2k7s" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.096619 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c2k7s" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.168992 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c2k7s" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.169739 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.170393 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.172100 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.172322 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 
38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.172510 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.172701 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.172897 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.173087 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.173234 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 
38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.173373 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.173554 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.223546 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c2k7s" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.224196 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.224736 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.225450 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.225742 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.226036 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.226345 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.226573 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.226830 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.227091 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.227350 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.227651 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.307989 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tnxg7" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.308029 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tnxg7" Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.349851 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.350335 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.350734 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.350919 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.351098 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.351245 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.351387 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.351525 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.351663 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.351808 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.351990 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.352204 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.520153 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.520214 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.569702 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.570251 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.570601 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.570995 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.571275 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.571516 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.571762 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.572066 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.572358 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.572619 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.572855 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.573112 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:00 crc kubenswrapper[4840]: I0129 12:09:00.819405 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-486ss" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="registry-server" probeResult="failure" output=<
Jan 29 12:09:00 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s
Jan 29 12:09:00 crc kubenswrapper[4840]: >
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.188923 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.189592 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.189882 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.190167 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.190546 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.190895 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.191522 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.191856 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.192192 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.192588 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.192928 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.193220 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: E0129 12:09:01.869183 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="7s"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.873451 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-r6w65"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.873491 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-r6w65"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.916240 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-r6w65"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.916825 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.917354 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.917714 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.918164 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.918448 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.918780 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.919090 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.919370 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.919643 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.919903 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:01 crc kubenswrapper[4840]: I0129 12:09:01.920190 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.198684 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-r6w65"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.199497 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.200428 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.200966 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.201388 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.201694 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.202027 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.202400 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.202648 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.202962 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.203352 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.203658 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.293696 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vgg49"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.294093 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vgg49"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.341325 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vgg49"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.342176 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.342966 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.343510 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.343812 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.344200 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.344467 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.344664 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.344896 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.345188 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.345462 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.345709 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.821804 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:09:02 crc kubenswrapper[4840]: I0129 12:09:02.821889 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jj2gn"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.188472 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.188554 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.205113 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vgg49"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.206013 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.206545 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.207142 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.207509 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.207791 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.208067 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.208350 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.208648 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.208932 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.209260 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.209542 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:03 crc kubenswrapper[4840]: I0129 12:09:03.862887 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jj2gn" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="registry-server" probeResult="failure" output=<
Jan 29 12:09:03 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s
Jan 29 12:09:03 crc kubenswrapper[4840]: >
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.013893 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.017825 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.018459 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.019172 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.019613 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.019961 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.020208 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.020522 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.020841 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.021164 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.021500 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.021770 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.022026 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.166115 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:09:04 crc kubenswrapper[4840]: I0129 12:09:04.238729 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pqht4" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="registry-server" probeResult="failure" output=<
Jan 29 12:09:04 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s
Jan 29 12:09:04 crc kubenswrapper[4840]: >
Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.175982 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.176797 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.177416 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused"
Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.178087 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4"
pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.178455 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.178782 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.179221 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.179585 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.179906 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.180321 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.180591 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:05 crc kubenswrapper[4840]: I0129 12:09:05.181002 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:07 crc kubenswrapper[4840]: E0129 12:09:07.296768 4840 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events/community-operators-486ss.188f324069359947\": dial tcp 38.102.83.200:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-486ss.188f324069359947 openshift-marketplace 29570 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-486ss,UID:3f137467-8040-45d5-bfa1-89860498eb85,APIVersion:v1,ResourceVersion:28497,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-29 12:07:10 +0000 UTC,LastTimestamp:2026-01-29 12:08:51.65325733 +0000 UTC m=+263.316237233,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.001290 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.002535 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.003083 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.003404 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection 
refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.003772 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.004122 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.004446 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.004789 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.005215 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: 
connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.005596 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.005977 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.006324 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.017500 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.017539 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:08 crc kubenswrapper[4840]: E0129 12:09:08.018025 4840 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 
12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.018623 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:09:08 crc kubenswrapper[4840]: W0129 12:09:08.042232 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ee7874aeed64438fecfdcc9f83473881df7986bf47ad4b292b5088dfbc968327 WatchSource:0}: Error finding container ee7874aeed64438fecfdcc9f83473881df7986bf47ad4b292b5088dfbc968327: Status 404 returned error can't find the container with id ee7874aeed64438fecfdcc9f83473881df7986bf47ad4b292b5088dfbc968327 Jan 29 12:09:08 crc kubenswrapper[4840]: I0129 12:09:08.186455 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee7874aeed64438fecfdcc9f83473881df7986bf47ad4b292b5088dfbc968327"} Jan 29 12:09:08 crc kubenswrapper[4840]: E0129 12:09:08.870195 4840 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" interval="7s" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.010837 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.011265 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.011799 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.012204 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.012490 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.012799 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.013229 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.013758 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.014124 4840 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.014691 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.015065 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.015355 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.194913 4840 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a7e52a312235438cbd5e696eeaf9fb738df54ade1fcf8906ea5c20db22440987" exitCode=0 Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.194976 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a7e52a312235438cbd5e696eeaf9fb738df54ade1fcf8906ea5c20db22440987"} Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.195326 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.195367 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.195895 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: E0129 12:09:09.196001 4840 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.196624 4840 status_manager.go:851] "Failed to get status for 
pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.197044 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.197239 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.197445 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.197644 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.197849 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.198107 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.198569 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.199038 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.200394 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.200734 4840 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.679437 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-486ss" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.680450 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.681062 4840 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.681442 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.682073 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": 
dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.682443 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.682851 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.683209 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.683546 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.683864 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.684146 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.684390 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.684634 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.722558 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-486ss" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.725451 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 
12:09:09.726292 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.726825 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.727212 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.727678 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.728218 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc 
kubenswrapper[4840]: I0129 12:09:09.728566 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.729088 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.729474 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.729891 4840 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 12:09:09.730324 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:09 crc kubenswrapper[4840]: I0129 
12:09:09.730850 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: E0129 12:09:10.141243 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-29T12:09:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:15db2d5dee506f58d0ee5bf1684107211c0473c43ef6111e13df0c55850f77c9\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:acd62b9cbbc1168a7c81182ba747850ea67c24294a6703fb341471191da484f8\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1676237031},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:06acdd148ddfe14125d9ab253b9eb0dca1930047787f5b277a21bc88cdfd5030\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a649014abb6de45bd5e9eba64d76cf536ed766c876c58c0e1388115bafecf763\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1185399018},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\
\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e27
6c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"na
mes\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: E0129 12:09:10.141862 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: E0129 12:09:10.142171 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: E0129 12:09:10.142423 4840 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: E0129 12:09:10.142845 4840 kubelet_node_status.go:585] "Error updating node status, 
will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: E0129 12:09:10.142859 4840 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.352809 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tnxg7" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.353497 4840 status_manager.go:851] "Failed to get status for pod" podUID="3f137467-8040-45d5-bfa1-89860498eb85" pod="openshift-marketplace/community-operators-486ss" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-486ss\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.353792 4840 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.354078 4840 status_manager.go:851] "Failed to get status for pod" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" pod="openshift-marketplace/certified-operators-sd7t2" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sd7t2\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.354282 4840 status_manager.go:851] "Failed to get status for pod" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" pod="openshift-marketplace/redhat-marketplace-vgg49" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vgg49\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.354473 4840 status_manager.go:851] "Failed to get status for pod" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.354833 4840 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.356080 4840 status_manager.go:851] "Failed to get status for pod" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" pod="openshift-marketplace/redhat-operators-pqht4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pqht4\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.356674 4840 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.356874 4840 status_manager.go:851] "Failed to get status for pod" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" pod="openshift-marketplace/redhat-operators-jj2gn" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-jj2gn\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.357056 4840 status_manager.go:851] "Failed to get status for pod" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" pod="openshift-marketplace/certified-operators-c2k7s" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-c2k7s\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.357210 4840 status_manager.go:851] "Failed to get status for pod" podUID="12410648-0772-40f7-9261-107634802711" pod="openshift-marketplace/redhat-marketplace-r6w65" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-r6w65\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:10 crc kubenswrapper[4840]: I0129 12:09:10.357364 4840 status_manager.go:851] "Failed to get status for pod" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" pod="openshift-marketplace/community-operators-tnxg7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tnxg7\": dial tcp 38.102.83.200:6443: connect: connection refused" Jan 29 12:09:12 crc kubenswrapper[4840]: I0129 12:09:11.213526 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"67cba89b97ba8ab796032681bc375bfc3b720ca05816fece110fbac0471f19d1"} Jan 29 12:09:12 crc kubenswrapper[4840]: I0129 12:09:11.213925 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f66793f3c5f590e056e2bbba51e0124d387e2f900808b9e929798be2adfe4f2"} Jan 29 
12:09:12 crc kubenswrapper[4840]: I0129 12:09:12.248381 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1161c9d45d752c5e2c781939e850f3b9063bad3cb00905394ff504fd4bb62bf0"} Jan 29 12:09:12 crc kubenswrapper[4840]: I0129 12:09:12.902224 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jj2gn" Jan 29 12:09:12 crc kubenswrapper[4840]: I0129 12:09:12.953102 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jj2gn" Jan 29 12:09:13 crc kubenswrapper[4840]: I0129 12:09:13.240723 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pqht4" Jan 29 12:09:13 crc kubenswrapper[4840]: I0129 12:09:13.258705 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"a2a59dd742f112e2714092a59bdcb58c55649664cb20dc23701ba92f3ed7927c"} Jan 29 12:09:13 crc kubenswrapper[4840]: I0129 12:09:13.289923 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pqht4" Jan 29 12:09:14 crc kubenswrapper[4840]: I0129 12:09:14.293013 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"0510e3211c4c041dca378c4507796302ef6f9270a0f51067889b4e4118b22f58"} Jan 29 12:09:14 crc kubenswrapper[4840]: I0129 12:09:14.293503 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:09:14 crc kubenswrapper[4840]: I0129 12:09:14.293571 4840 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:14 crc kubenswrapper[4840]: I0129 12:09:14.293602 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:14 crc kubenswrapper[4840]: I0129 12:09:14.304285 4840 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 29 12:09:15 crc kubenswrapper[4840]: I0129 12:09:15.298538 4840 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:15 crc kubenswrapper[4840]: I0129 12:09:15.298570 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="a0ad1298-8886-4cc3-892d-7574685f0c3c" Jan 29 12:09:18 crc kubenswrapper[4840]: I0129 12:09:18.016401 4840 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="ad65aabc-a14c-44ed-a5fc-ee526a2212da" Jan 29 12:09:20 crc kubenswrapper[4840]: I0129 12:09:20.325893 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 29 12:09:20 crc kubenswrapper[4840]: I0129 12:09:20.327111 4840 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f" exitCode=1 Jan 29 12:09:20 crc kubenswrapper[4840]: I0129 12:09:20.327158 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f"} Jan 29 12:09:20 crc kubenswrapper[4840]: I0129 12:09:20.327738 4840 scope.go:117] "RemoveContainer" containerID="efc36b858cb1d0a4308c8c51fe428ebe5bb7b89be15ea3ef382405a40a8a495f" Jan 29 12:09:21 crc kubenswrapper[4840]: I0129 12:09:21.338913 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 29 12:09:21 crc kubenswrapper[4840]: I0129 12:09:21.344824 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e30f8afea894327d0484deacf78304865e322e74d5fb5348d0d02387f2dd2da3"} Jan 29 12:09:28 crc kubenswrapper[4840]: I0129 12:09:28.816729 4840 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 29 12:09:40 crc kubenswrapper[4840]: I0129 12:09:40.458893 4840 generic.go:334] "Generic (PLEG): container finished" podID="660d278f-da23-4033-82de-88c42ef375ed" containerID="60f1e40925d3f4888e5771354e96aa15c213200c60df268c75c8977a182cfad0" exitCode=0 Jan 29 12:09:40 crc kubenswrapper[4840]: I0129 12:09:40.459145 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerDied","Data":"60f1e40925d3f4888e5771354e96aa15c213200c60df268c75c8977a182cfad0"} Jan 29 12:09:40 crc kubenswrapper[4840]: I0129 12:09:40.460297 4840 scope.go:117] "RemoveContainer" containerID="60f1e40925d3f4888e5771354e96aa15c213200c60df268c75c8977a182cfad0" Jan 29 12:09:40 crc kubenswrapper[4840]: I0129 12:09:40.529825 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 29 12:09:40 crc kubenswrapper[4840]: I0129 12:09:40.857043 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:09:40 crc kubenswrapper[4840]: I0129 12:09:40.857382 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:09:41 crc kubenswrapper[4840]: I0129 12:09:41.470366 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/1.log" Jan 29 12:09:41 crc kubenswrapper[4840]: I0129 12:09:41.471140 4840 generic.go:334] "Generic (PLEG): container finished" podID="660d278f-da23-4033-82de-88c42ef375ed" containerID="9d9660588c73f1797b1abc9bbf7d4c8bb5a45ccc64db803a601f7d68153c6c8f" exitCode=1 Jan 29 12:09:41 crc kubenswrapper[4840]: I0129 12:09:41.471184 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerDied","Data":"9d9660588c73f1797b1abc9bbf7d4c8bb5a45ccc64db803a601f7d68153c6c8f"} Jan 29 12:09:41 crc kubenswrapper[4840]: I0129 12:09:41.471221 4840 scope.go:117] "RemoveContainer" containerID="60f1e40925d3f4888e5771354e96aa15c213200c60df268c75c8977a182cfad0" Jan 29 12:09:41 crc kubenswrapper[4840]: I0129 12:09:41.472808 4840 scope.go:117] "RemoveContainer" containerID="9d9660588c73f1797b1abc9bbf7d4c8bb5a45ccc64db803a601f7d68153c6c8f" Jan 29 12:09:41 crc kubenswrapper[4840]: E0129 12:09:41.473282 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator 
pod=marketplace-operator-79b997595-cwqw5_openshift-marketplace(660d278f-da23-4033-82de-88c42ef375ed)\"" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" podUID="660d278f-da23-4033-82de-88c42ef375ed" Jan 29 12:09:42 crc kubenswrapper[4840]: I0129 12:09:42.358429 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 29 12:09:42 crc kubenswrapper[4840]: I0129 12:09:42.479907 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/1.log" Jan 29 12:09:42 crc kubenswrapper[4840]: I0129 12:09:42.480633 4840 scope.go:117] "RemoveContainer" containerID="9d9660588c73f1797b1abc9bbf7d4c8bb5a45ccc64db803a601f7d68153c6c8f" Jan 29 12:09:42 crc kubenswrapper[4840]: E0129 12:09:42.480983 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-cwqw5_openshift-marketplace(660d278f-da23-4033-82de-88c42ef375ed)\"" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" podUID="660d278f-da23-4033-82de-88c42ef375ed" Jan 29 12:09:42 crc kubenswrapper[4840]: I0129 12:09:42.725811 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 29 12:09:42 crc kubenswrapper[4840]: I0129 12:09:42.779763 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 29 12:09:43 crc kubenswrapper[4840]: I0129 12:09:43.064936 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 29 12:09:43 crc kubenswrapper[4840]: I0129 12:09:43.418077 4840 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 29 12:09:43 crc kubenswrapper[4840]: I0129 12:09:43.546042 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 29 12:09:43 crc kubenswrapper[4840]: I0129 12:09:43.578534 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 29 12:09:43 crc kubenswrapper[4840]: I0129 12:09:43.581160 4840 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Jan 29 12:09:43 crc kubenswrapper[4840]: I0129 12:09:43.756133 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 29 12:09:43 crc kubenswrapper[4840]: I0129 12:09:43.949982 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 29 12:09:44 crc kubenswrapper[4840]: I0129 12:09:44.032368 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 29 12:09:44 crc kubenswrapper[4840]: I0129 12:09:44.121906 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Jan 29 12:09:44 crc kubenswrapper[4840]: I0129 12:09:44.245912 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 29 12:09:44 crc kubenswrapper[4840]: I0129 12:09:44.289409 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 29 12:09:44 crc kubenswrapper[4840]: I0129 12:09:44.414110 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 29 12:09:45 crc kubenswrapper[4840]: I0129 12:09:45.109745 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 29 12:09:45 crc kubenswrapper[4840]: I0129 12:09:45.148351 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 29 12:09:45 crc kubenswrapper[4840]: I0129 12:09:45.241647 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 29 12:09:45 crc kubenswrapper[4840]: I0129 12:09:45.361826 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 29 12:09:45 crc kubenswrapper[4840]: I0129 12:09:45.526647 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 29 12:09:45 crc kubenswrapper[4840]: I0129 12:09:45.611202 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 29 12:09:45 crc kubenswrapper[4840]: I0129 12:09:45.843500 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Jan 29 12:09:46 crc kubenswrapper[4840]: I0129 12:09:46.208498 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 29 12:09:46 crc kubenswrapper[4840]: I0129 12:09:46.507112 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 29 12:09:46 crc kubenswrapper[4840]: I0129 12:09:46.582086 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 29 12:09:46 crc kubenswrapper[4840]: I0129 12:09:46.734681 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 29 12:09:47 crc kubenswrapper[4840]: I0129 12:09:47.083394 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 29 12:09:47 crc kubenswrapper[4840]: I0129 12:09:47.541784 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 29 12:09:47 crc kubenswrapper[4840]: I0129 12:09:47.613295 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Jan 29 12:09:47 crc kubenswrapper[4840]: I0129 12:09:47.774562 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 29 12:09:47 crc kubenswrapper[4840]: I0129 12:09:47.776877 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 29 12:09:47 crc kubenswrapper[4840]: I0129 12:09:47.782711 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 29 12:09:47 crc kubenswrapper[4840]: I0129 12:09:47.978512 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 29 12:09:48 crc kubenswrapper[4840]: I0129 12:09:48.142480 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 29 12:09:48 crc kubenswrapper[4840]: I0129 12:09:48.242822 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 29 12:09:48 crc kubenswrapper[4840]: I0129 12:09:48.296978 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 29 12:09:48 crc kubenswrapper[4840]: I0129 12:09:48.373641 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 29 12:09:48 crc kubenswrapper[4840]: I0129 12:09:48.428149 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 29 12:09:48 crc kubenswrapper[4840]: I0129 12:09:48.642573 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 29 12:09:48 crc kubenswrapper[4840]: I0129 12:09:48.814228 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.032219 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.660936 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.720309 4840 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.720808 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=70.72079153 podStartE2EDuration="1m10.72079153s" podCreationTimestamp="2026-01-29 12:08:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:17.995287245 +0000 UTC m=+289.658267148" watchObservedRunningTime="2026-01-29 12:09:49.72079153 +0000 UTC m=+321.383771413"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.721232 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c2k7s" podStartSLOduration=56.99780565 podStartE2EDuration="2m40.721226773s" podCreationTimestamp="2026-01-29 12:07:09 +0000 UTC" firstStartedPulling="2026-01-29 12:07:11.816147598 +0000 UTC m=+163.479127491" lastFinishedPulling="2026-01-29 12:08:55.539568721 +0000 UTC m=+267.202548614" observedRunningTime="2026-01-29 12:09:18.050798593 +0000 UTC m=+289.713778486" watchObservedRunningTime="2026-01-29 12:09:49.721226773 +0000 UTC m=+321.384206666"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.721623 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-486ss" podStartSLOduration=53.644494554 podStartE2EDuration="2m40.721615783s" podCreationTimestamp="2026-01-29 12:07:09 +0000 UTC" firstStartedPulling="2026-01-29 12:07:10.804301373 +0000 UTC m=+162.467281276" lastFinishedPulling="2026-01-29 12:08:57.881422612 +0000 UTC m=+269.544402505" observedRunningTime="2026-01-29 12:09:18.103392161 +0000 UTC m=+289.766372054" watchObservedRunningTime="2026-01-29 12:09:49.721615783 +0000 UTC m=+321.384595696"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.721994 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tnxg7" podStartSLOduration=55.07030207 podStartE2EDuration="2m40.721989373s" podCreationTimestamp="2026-01-29 12:07:09 +0000 UTC" firstStartedPulling="2026-01-29 12:07:11.83523302 +0000 UTC m=+163.498212913" lastFinishedPulling="2026-01-29 12:08:57.486920323 +0000 UTC m=+269.149900216" observedRunningTime="2026-01-29 12:09:18.081336633 +0000 UTC m=+289.744316526" watchObservedRunningTime="2026-01-29 12:09:49.721989373 +0000 UTC m=+321.384969266"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.723347 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vgg49" podStartSLOduration=56.29612517 podStartE2EDuration="2m38.72334055s" podCreationTimestamp="2026-01-29 12:07:11 +0000 UTC" firstStartedPulling="2026-01-29 12:07:14.055147782 +0000 UTC m=+165.718127675" lastFinishedPulling="2026-01-29 12:08:56.482363152 +0000 UTC m=+268.145343055" observedRunningTime="2026-01-29 12:09:17.968442676 +0000 UTC m=+289.631422659" watchObservedRunningTime="2026-01-29 12:09:49.72334055 +0000 UTC m=+321.386320443"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.723529 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pqht4" podStartSLOduration=55.029787246 podStartE2EDuration="2m37.723526055s" podCreationTimestamp="2026-01-29 12:07:12 +0000 UTC" firstStartedPulling="2026-01-29 12:07:15.088235986 +0000 UTC m=+166.751215879" lastFinishedPulling="2026-01-29 12:08:57.781974795 +0000 UTC m=+269.444954688" observedRunningTime="2026-01-29 12:09:18.013496957 +0000 UTC m=+289.676476850" watchObservedRunningTime="2026-01-29 12:09:49.723526055 +0000 UTC m=+321.386505948"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.723989 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sd7t2" podStartSLOduration=57.055755814 podStartE2EDuration="2m40.723981309s" podCreationTimestamp="2026-01-29 12:07:09 +0000 UTC" firstStartedPulling="2026-01-29 12:07:11.830926994 +0000 UTC m=+163.493906887" lastFinishedPulling="2026-01-29 12:08:55.499152489 +0000 UTC m=+267.162132382" observedRunningTime="2026-01-29 12:09:17.951791018 +0000 UTC m=+289.614770911" watchObservedRunningTime="2026-01-29 12:09:49.723981309 +0000 UTC m=+321.386961202"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.724365 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-r6w65" podStartSLOduration=57.218639843 podStartE2EDuration="2m38.724361789s" podCreationTimestamp="2026-01-29 12:07:11 +0000 UTC" firstStartedPulling="2026-01-29 12:07:14.032628532 +0000 UTC m=+165.695608425" lastFinishedPulling="2026-01-29 12:08:55.538350478 +0000 UTC m=+267.201330371" observedRunningTime="2026-01-29 12:09:18.065920059 +0000 UTC m=+289.728899952" watchObservedRunningTime="2026-01-29 12:09:49.724361789 +0000 UTC m=+321.387341682"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.724443 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jj2gn" podStartSLOduration=55.140437539 podStartE2EDuration="2m37.724440351s" podCreationTimestamp="2026-01-29 12:07:12 +0000 UTC" firstStartedPulling="2026-01-29 12:07:15.072049657 +0000 UTC m=+166.735029550" lastFinishedPulling="2026-01-29 12:08:57.656052469 +0000 UTC m=+269.319032362" observedRunningTime="2026-01-29 12:09:18.034301888 +0000 UTC m=+289.697281781" watchObservedRunningTime="2026-01-29 12:09:49.724440351 +0000 UTC m=+321.387420244"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.726025 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.726068 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.730357 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.743227 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=35.743205997 podStartE2EDuration="35.743205997s" podCreationTimestamp="2026-01-29 12:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:09:49.743132725 +0000 UTC m=+321.406112638" watchObservedRunningTime="2026-01-29 12:09:49.743205997 +0000 UTC m=+321.406185890"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.806401 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 29 12:09:49 crc kubenswrapper[4840]: I0129 12:09:49.858314 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.040099 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.148792 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.193380 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.331151 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.456005 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.637759 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.644763 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.741273 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.764930 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.799072 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.856393 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.856438 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.857027 4840 scope.go:117] "RemoveContainer" containerID="9d9660588c73f1797b1abc9bbf7d4c8bb5a45ccc64db803a601f7d68153c6c8f"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.902396 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.963786 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 29 12:09:50 crc kubenswrapper[4840]: I0129 12:09:50.980075 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.012181 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.032685 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.044668 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.085748 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.531225 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/2.log"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.531962 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/1.log"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.532011 4840 generic.go:334] "Generic (PLEG): container finished" podID="660d278f-da23-4033-82de-88c42ef375ed" containerID="c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6" exitCode=1
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.532042 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerDied","Data":"c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6"}
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.532077 4840 scope.go:117] "RemoveContainer" containerID="9d9660588c73f1797b1abc9bbf7d4c8bb5a45ccc64db803a601f7d68153c6c8f"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.532779 4840 scope.go:117] "RemoveContainer" containerID="c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6"
Jan 29 12:09:51 crc kubenswrapper[4840]: E0129 12:09:51.533144 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-cwqw5_openshift-marketplace(660d278f-da23-4033-82de-88c42ef375ed)\"" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" podUID="660d278f-da23-4033-82de-88c42ef375ed"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.617613 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.635593 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.838049 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.930089 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.973587 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 29 12:09:51 crc kubenswrapper[4840]: I0129 12:09:51.975979 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.077792 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.310436 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.314380 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.340280 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.356879 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.403269 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.542558 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/2.log"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.581006 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.666800 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.788532 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 29 12:09:52 crc kubenswrapper[4840]: I0129 12:09:52.891909 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.002393 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.018802 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.018854 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.023157 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.246612 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.258984 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.265515 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.287705 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.289101 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.319562 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.356136 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.493196 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.495127 4840 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.505539 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.534350 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.552583 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.712020 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 29 12:09:53 crc kubenswrapper[4840]: I0129 12:09:53.768637 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.002932 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.123883 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.225123 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.366243 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.456655 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.458875 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.566490 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.755277 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.869286 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 29 12:09:54 crc kubenswrapper[4840]: I0129 12:09:54.987613 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.175827 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.237032 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.253543 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.398677 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.563789 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.665874 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.743655 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.806232 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 29 12:09:55 crc kubenswrapper[4840]: I0129 12:09:55.934793 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.034284 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.131624 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.478067 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.560109 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.652269 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.658692 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.841041 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.926306 4840 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 29 12:09:56 crc kubenswrapper[4840]: I0129 12:09:56.929901 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 29 12:09:57 crc kubenswrapper[4840]: I0129 12:09:57.252582 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 29 12:09:57 crc kubenswrapper[4840]: I0129 12:09:57.303282 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 29 12:09:57 crc kubenswrapper[4840]: I0129 12:09:57.420305 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 29 12:09:57 crc kubenswrapper[4840]: I0129 12:09:57.525105 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 29 12:09:57 crc kubenswrapper[4840]: I0129 12:09:57.736863 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Jan 29 12:09:57 crc kubenswrapper[4840]: I0129 12:09:57.898720 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.077729 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.084789 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.086509 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.288821 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.444295 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.581182 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.645797 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.664705 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.678732 4840 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.681711 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.774041 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 29 12:09:58 crc kubenswrapper[4840]: I0129 12:09:58.950887 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 29 12:09:59 crc kubenswrapper[4840]: I0129 12:09:59.373603 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 12:09:59 crc kubenswrapper[4840]: I0129 12:09:59.598998 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 29 12:09:59 crc kubenswrapper[4840]: I0129 12:09:59.649595 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 29 12:09:59 crc kubenswrapper[4840]: I0129 12:09:59.702663 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 29 12:09:59 crc kubenswrapper[4840]: I0129 12:09:59.794667 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 29 12:09:59 crc kubenswrapper[4840]: I0129 12:09:59.806855 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 29 12:09:59 crc kubenswrapper[4840]: I0129 12:09:59.926833 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.060818 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.161851 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.282547 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.353886 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.355362 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.430194 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.505025 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.583142 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.856600 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.856657 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.857529 4840 scope.go:117] "RemoveContainer" containerID="c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6"
Jan 29 12:10:00 crc kubenswrapper[4840]: E0129 12:10:00.857780 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=marketplace-operator pod=marketplace-operator-79b997595-cwqw5_openshift-marketplace(660d278f-da23-4033-82de-88c42ef375ed)\"" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" podUID="660d278f-da23-4033-82de-88c42ef375ed"
Jan 29 12:10:00 crc kubenswrapper[4840]: I0129 12:10:00.864476 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.091020 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.142627 4840 reflector.go:368] Caches populated
for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.221731 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.250940 4840 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.251179 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5" gracePeriod=5 Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.439060 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.628618 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.643790 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.645871 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.771319 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.804386 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.844563 4840 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.909715 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 29 12:10:01 crc kubenswrapper[4840]: I0129 12:10:01.951168 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 29 12:10:02 crc kubenswrapper[4840]: I0129 12:10:02.022085 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 29 12:10:02 crc kubenswrapper[4840]: I0129 12:10:02.343627 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 29 12:10:02 crc kubenswrapper[4840]: I0129 12:10:02.573638 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 29 12:10:02 crc kubenswrapper[4840]: I0129 12:10:02.905832 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 12:10:03.056500 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 12:10:03.136015 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 12:10:03.184344 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 12:10:03.195665 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 
12:10:03.594478 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 12:10:03.823984 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 12:10:03.853137 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 29 12:10:03 crc kubenswrapper[4840]: I0129 12:10:03.972750 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 29 12:10:04 crc kubenswrapper[4840]: I0129 12:10:04.669185 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 29 12:10:04 crc kubenswrapper[4840]: I0129 12:10:04.893860 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 29 12:10:04 crc kubenswrapper[4840]: I0129 12:10:04.897214 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.049784 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.252639 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.300767 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.312847 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.316300 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.368801 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.400872 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.500223 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.604785 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.754568 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.757484 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.796849 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.818241 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 12:10:05.918880 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 29 12:10:05 crc kubenswrapper[4840]: I0129 
12:10:05.975127 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.053722 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.061121 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.083355 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.086398 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k"] Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.086844 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" podUID="0e647bfe-fa82-470d-9e53-4a0046d446d7" containerName="controller-manager" containerID="cri-o://f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a" gracePeriod=30 Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.156028 4840 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.197199 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc"] Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.197537 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" podUID="4e1d0553-e82a-451d-b551-5a6a5c44f6a1" containerName="route-controller-manager" 
containerID="cri-o://a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e" gracePeriod=30 Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.248248 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.346915 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.362863 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.404836 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.404967 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.430897 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.532812 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541201 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541286 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541315 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541437 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541464 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541452 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541544 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541576 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541682 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541797 4840 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541817 4840 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541828 4840 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.541841 4840 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.547059 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.552178 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.555279 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.626668 4840 generic.go:334] "Generic (PLEG): container finished" podID="4e1d0553-e82a-451d-b551-5a6a5c44f6a1" containerID="a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e" exitCode=0 Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.626745 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" event={"ID":"4e1d0553-e82a-451d-b551-5a6a5c44f6a1","Type":"ContainerDied","Data":"a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e"} Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.626785 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" event={"ID":"4e1d0553-e82a-451d-b551-5a6a5c44f6a1","Type":"ContainerDied","Data":"b3ead8045a80b61cbe6d0e59dac2937678cceb98833f4024e2808b053a0cec9e"} Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.626805 4840 scope.go:117] "RemoveContainer" containerID="a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.626958 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.630278 4840 generic.go:334] "Generic (PLEG): container finished" podID="0e647bfe-fa82-470d-9e53-4a0046d446d7" containerID="f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a" exitCode=0 Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.630332 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" event={"ID":"0e647bfe-fa82-470d-9e53-4a0046d446d7","Type":"ContainerDied","Data":"f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a"} Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.630360 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" event={"ID":"0e647bfe-fa82-470d-9e53-4a0046d446d7","Type":"ContainerDied","Data":"2d179248275d53df2b35a4c7b59b6ca466bb5aae139be2fd1d51c65aec584c3f"} Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.630424 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.639636 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.639695 4840 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5" exitCode=137 Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.639812 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.650836 4840 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.653518 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.657356 4840 scope.go:117] "RemoveContainer" containerID="a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e" Jan 29 12:10:06 crc kubenswrapper[4840]: E0129 12:10:06.657800 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e\": container with ID starting with a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e not found: ID does not exist" containerID="a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.657849 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e"} err="failed to get container status \"a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e\": rpc error: code = NotFound desc = could not find container \"a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e\": container with ID starting with a17d5da40e2183beab3e851f7d2e41964e697c74fd263bc2575b7f794544905e not found: ID does not exist" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.657887 4840 scope.go:117] "RemoveContainer" containerID="f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a" Jan 29 12:10:06 crc 
kubenswrapper[4840]: I0129 12:10:06.680325 4840 scope.go:117] "RemoveContainer" containerID="f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a" Jan 29 12:10:06 crc kubenswrapper[4840]: E0129 12:10:06.681077 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a\": container with ID starting with f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a not found: ID does not exist" containerID="f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.681150 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a"} err="failed to get container status \"f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a\": rpc error: code = NotFound desc = could not find container \"f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a\": container with ID starting with f8db0452fc239b854cc4210ce04db1e543a707e774b27f63660ca04a90d68d7a not found: ID does not exist" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.681195 4840 scope.go:117] "RemoveContainer" containerID="71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.697661 4840 scope.go:117] "RemoveContainer" containerID="71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5" Jan 29 12:10:06 crc kubenswrapper[4840]: E0129 12:10:06.698344 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5\": container with ID starting with 71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5 not found: ID does not exist" 
containerID="71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.698384 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5"} err="failed to get container status \"71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5\": rpc error: code = NotFound desc = could not find container \"71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5\": container with ID starting with 71e745dc2251868198fe759a0a9d2f3190a3da5c9ddd444a329f2774460401f5 not found: ID does not exist" Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752353 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-serving-cert\") pod \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752439 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr9xd\" (UniqueName: \"kubernetes.io/projected/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-kube-api-access-dr9xd\") pod \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752498 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-proxy-ca-bundles\") pod \"0e647bfe-fa82-470d-9e53-4a0046d446d7\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752530 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp4p9\" (UniqueName: 
\"kubernetes.io/projected/0e647bfe-fa82-470d-9e53-4a0046d446d7-kube-api-access-vp4p9\") pod \"0e647bfe-fa82-470d-9e53-4a0046d446d7\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752601 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-config\") pod \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752638 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e647bfe-fa82-470d-9e53-4a0046d446d7-serving-cert\") pod \"0e647bfe-fa82-470d-9e53-4a0046d446d7\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752664 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-config\") pod \"0e647bfe-fa82-470d-9e53-4a0046d446d7\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752748 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-client-ca\") pod \"0e647bfe-fa82-470d-9e53-4a0046d446d7\" (UID: \"0e647bfe-fa82-470d-9e53-4a0046d446d7\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.752773 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-client-ca\") pod \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\" (UID: \"4e1d0553-e82a-451d-b551-5a6a5c44f6a1\") " Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.753940 4840 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-client-ca" (OuterVolumeSpecName: "client-ca") pod "4e1d0553-e82a-451d-b551-5a6a5c44f6a1" (UID: "4e1d0553-e82a-451d-b551-5a6a5c44f6a1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.754736 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-config" (OuterVolumeSpecName: "config") pod "4e1d0553-e82a-451d-b551-5a6a5c44f6a1" (UID: "4e1d0553-e82a-451d-b551-5a6a5c44f6a1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.754773 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e647bfe-fa82-470d-9e53-4a0046d446d7" (UID: "0e647bfe-fa82-470d-9e53-4a0046d446d7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.754882 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0e647bfe-fa82-470d-9e53-4a0046d446d7" (UID: "0e647bfe-fa82-470d-9e53-4a0046d446d7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.755145 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-config" (OuterVolumeSpecName: "config") pod "0e647bfe-fa82-470d-9e53-4a0046d446d7" (UID: "0e647bfe-fa82-470d-9e53-4a0046d446d7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.760830 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e647bfe-fa82-470d-9e53-4a0046d446d7-kube-api-access-vp4p9" (OuterVolumeSpecName: "kube-api-access-vp4p9") pod "0e647bfe-fa82-470d-9e53-4a0046d446d7" (UID: "0e647bfe-fa82-470d-9e53-4a0046d446d7"). InnerVolumeSpecName "kube-api-access-vp4p9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.761319 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-kube-api-access-dr9xd" (OuterVolumeSpecName: "kube-api-access-dr9xd") pod "4e1d0553-e82a-451d-b551-5a6a5c44f6a1" (UID: "4e1d0553-e82a-451d-b551-5a6a5c44f6a1"). InnerVolumeSpecName "kube-api-access-dr9xd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.762551 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4e1d0553-e82a-451d-b551-5a6a5c44f6a1" (UID: "4e1d0553-e82a-451d-b551-5a6a5c44f6a1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.769136 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e647bfe-fa82-470d-9e53-4a0046d446d7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e647bfe-fa82-470d-9e53-4a0046d446d7" (UID: "0e647bfe-fa82-470d-9e53-4a0046d446d7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.769526 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854602 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-config\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854656 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e647bfe-fa82-470d-9e53-4a0046d446d7-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854669 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-config\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854687 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854697 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-client-ca\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854707 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854719 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr9xd\" (UniqueName: \"kubernetes.io/projected/4e1d0553-e82a-451d-b551-5a6a5c44f6a1-kube-api-access-dr9xd\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854731 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e647bfe-fa82-470d-9e53-4a0046d446d7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.854742 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp4p9\" (UniqueName: \"kubernetes.io/projected/0e647bfe-fa82-470d-9e53-4a0046d446d7-kube-api-access-vp4p9\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.861720 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.965097 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc"]
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.968513 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bd79cd598-rkwxc"]
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.975624 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k"]
Jan 29 12:10:06 crc kubenswrapper[4840]: I0129 12:10:06.979880 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8bb9bcff4-vzt7k"]
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.010393 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e647bfe-fa82-470d-9e53-4a0046d446d7" path="/var/lib/kubelet/pods/0e647bfe-fa82-470d-9e53-4a0046d446d7/volumes"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.011221 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e1d0553-e82a-451d-b551-5a6a5c44f6a1" path="/var/lib/kubelet/pods/4e1d0553-e82a-451d-b551-5a6a5c44f6a1/volumes"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.011656 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.011863 4840 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.020476 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.023278 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.023323 4840 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3796637a-8c27-4975-a4a6-509b9d08b385"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.029114 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.029149 4840 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="3796637a-8c27-4975-a4a6-509b9d08b385"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.069231 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.290639 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.683578 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.823619 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.825539 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.857792 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 29 12:10:07 crc kubenswrapper[4840]: I0129 12:10:07.964629 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.068487 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"]
Jan 29 12:10:08 crc kubenswrapper[4840]: E0129 12:10:08.068976 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e1d0553-e82a-451d-b551-5a6a5c44f6a1" containerName="route-controller-manager"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.068996 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e1d0553-e82a-451d-b551-5a6a5c44f6a1" containerName="route-controller-manager"
Jan 29 12:10:08 crc kubenswrapper[4840]: E0129 12:10:08.069014 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069021 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 12:10:08 crc kubenswrapper[4840]: E0129 12:10:08.069037 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" containerName="installer"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069056 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" containerName="installer"
Jan 29 12:10:08 crc kubenswrapper[4840]: E0129 12:10:08.069071 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e647bfe-fa82-470d-9e53-4a0046d446d7" containerName="controller-manager"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069078 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e647bfe-fa82-470d-9e53-4a0046d446d7" containerName="controller-manager"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069216 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3262084-4755-4d63-b056-ea1a16cbd8e4" containerName="installer"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069231 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e1d0553-e82a-451d-b551-5a6a5c44f6a1" containerName="route-controller-manager"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069240 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069249 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e647bfe-fa82-470d-9e53-4a0046d446d7" containerName="controller-manager"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.069895 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.072657 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"]
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.073667 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.073722 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.074403 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.075103 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.075119 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.075294 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.076387 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.077683 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.077790 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.078128 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.078345 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.079010 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.079359 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.085237 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.089199 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"]
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.096822 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"]
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.173061 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-client-ca\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.173197 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-config\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.173247 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lv7f\" (UniqueName: \"kubernetes.io/projected/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-kube-api-access-6lv7f\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.173354 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-serving-cert\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.173398 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-proxy-ca-bundles\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.210381 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.238810 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.242796 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.246853 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.274985 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-config\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275080 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqkgz\" (UniqueName: \"kubernetes.io/projected/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-kube-api-access-zqkgz\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275136 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lv7f\" (UniqueName: \"kubernetes.io/projected/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-kube-api-access-6lv7f\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275157 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-config\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275184 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-serving-cert\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275208 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-client-ca\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275535 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-proxy-ca-bundles\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275873 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-client-ca\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.275931 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-serving-cert\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.277089 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-config\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.277301 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-client-ca\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.277632 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-proxy-ca-bundles\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.280825 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-serving-cert\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.295802 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lv7f\" (UniqueName: \"kubernetes.io/projected/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-kube-api-access-6lv7f\") pod \"controller-manager-6fc6dd89d5-rdkt6\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.378565 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-serving-cert\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.378670 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqkgz\" (UniqueName: \"kubernetes.io/projected/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-kube-api-access-zqkgz\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.378697 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-config\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.378730 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-client-ca\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.379751 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-client-ca\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.380033 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-config\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.383386 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-serving-cert\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.392555 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.406217 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqkgz\" (UniqueName: \"kubernetes.io/projected/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-kube-api-access-zqkgz\") pod \"route-controller-manager-797bd65847-svzf6\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.413905 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.600839 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.652486 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"]
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.661265 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 29 12:10:08 crc kubenswrapper[4840]: I0129 12:10:08.862370 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"]
Jan 29 12:10:08 crc kubenswrapper[4840]: W0129 12:10:08.862913 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ef9d4c_dbd5_4db7_9cf0_b7ab2fe6f288.slice/crio-289f43d086b1def0cc7dccbdb9aaad454ce9e09d0c0a392554ba23779799994b WatchSource:0}: Error finding container 289f43d086b1def0cc7dccbdb9aaad454ce9e09d0c0a392554ba23779799994b: Status 404 returned error can't find the container with id 289f43d086b1def0cc7dccbdb9aaad454ce9e09d0c0a392554ba23779799994b
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.038340 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.318583 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.380982 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.453231 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.659470 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.669132 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" event={"ID":"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288","Type":"ContainerStarted","Data":"1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266"}
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.669195 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" event={"ID":"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288","Type":"ContainerStarted","Data":"289f43d086b1def0cc7dccbdb9aaad454ce9e09d0c0a392554ba23779799994b"}
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.670161 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.671826 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" event={"ID":"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8","Type":"ContainerStarted","Data":"eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41"}
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.671867 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" event={"ID":"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8","Type":"ContainerStarted","Data":"370217e295f5d764f3333df4e1458840904310bb2a0c9fd2065345e7d39432ab"}
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.672121 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.675636 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.681428 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.697967 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" podStartSLOduration=3.69791955 podStartE2EDuration="3.69791955s" podCreationTimestamp="2026-01-29 12:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:10:09.696293645 +0000 UTC m=+341.359273538" watchObservedRunningTime="2026-01-29 12:10:09.69791955 +0000 UTC m=+341.360899433"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.778489 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 29 12:10:09 crc kubenswrapper[4840]: I0129 12:10:09.798437 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Jan 29 12:10:10 crc kubenswrapper[4840]: I0129 12:10:10.006872 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 29 12:10:11 crc kubenswrapper[4840]: I0129 12:10:11.575184 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 29 12:10:11 crc kubenswrapper[4840]: I0129 12:10:11.768264 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 29 12:10:11 crc kubenswrapper[4840]: I0129 12:10:11.979415 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 29 12:10:12 crc kubenswrapper[4840]: I0129 12:10:12.031939 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 29 12:10:12 crc kubenswrapper[4840]: I0129 12:10:12.047222 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 29 12:10:12 crc kubenswrapper[4840]: I0129 12:10:12.196497 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Jan 29 12:10:12 crc kubenswrapper[4840]: I0129 12:10:12.503041 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 29 12:10:14 crc kubenswrapper[4840]: I0129 12:10:14.133002 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 29 12:10:14 crc kubenswrapper[4840]: I0129 12:10:14.477829 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 29 12:10:14 crc kubenswrapper[4840]: I0129 12:10:14.895529 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 29 12:10:15 crc kubenswrapper[4840]: I0129 12:10:15.001499 4840 scope.go:117] "RemoveContainer" containerID="c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6"
Jan 29 12:10:15 crc kubenswrapper[4840]: I0129 12:10:15.708599 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/2.log"
Jan 29 12:10:15 crc kubenswrapper[4840]: I0129 12:10:15.708978 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerStarted","Data":"107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0"}
Jan 29 12:10:15 crc kubenswrapper[4840]: I0129 12:10:15.709470 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5"
Jan 29 12:10:15 crc kubenswrapper[4840]: I0129 12:10:15.715483 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5"
Jan 29 12:10:15 crc kubenswrapper[4840]: I0129 12:10:15.723182 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 29 12:10:15 crc kubenswrapper[4840]: I0129 12:10:15.732381 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" podStartSLOduration=9.73235641 podStartE2EDuration="9.73235641s" podCreationTimestamp="2026-01-29 12:10:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:10:09.745303804 +0000 UTC m=+341.408283697" watchObservedRunningTime="2026-01-29 12:10:15.73235641 +0000 UTC m=+347.395336303"
Jan 29 12:10:23 crc kubenswrapper[4840]: I0129 12:10:23.522208 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:10:23 crc kubenswrapper[4840]: I0129 12:10:23.522767 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.097915 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"]
Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.098398 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" podUID="23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" containerName="controller-manager" containerID="cri-o://1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266" gracePeriod=30
Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.128335 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"]
Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.128626 4840 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" podUID="2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" containerName="route-controller-manager" containerID="cri-o://eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41" gracePeriod=30 Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.669734 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.737166 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.788469 4840 generic.go:334] "Generic (PLEG): container finished" podID="23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" containerID="1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266" exitCode=0 Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.788591 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.793099 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" event={"ID":"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288","Type":"ContainerDied","Data":"1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266"} Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.793210 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6" event={"ID":"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288","Type":"ContainerDied","Data":"289f43d086b1def0cc7dccbdb9aaad454ce9e09d0c0a392554ba23779799994b"} Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.793267 4840 scope.go:117] "RemoveContainer" containerID="1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.795281 4840 generic.go:334] "Generic (PLEG): container finished" podID="2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" containerID="eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41" exitCode=0 Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.795325 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" event={"ID":"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8","Type":"ContainerDied","Data":"eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41"} Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.795348 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" event={"ID":"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8","Type":"ContainerDied","Data":"370217e295f5d764f3333df4e1458840904310bb2a0c9fd2065345e7d39432ab"} Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.795362 4840 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.822218 4840 scope.go:117] "RemoveContainer" containerID="1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266" Jan 29 12:10:26 crc kubenswrapper[4840]: E0129 12:10:26.826160 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266\": container with ID starting with 1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266 not found: ID does not exist" containerID="1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.826231 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266"} err="failed to get container status \"1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266\": rpc error: code = NotFound desc = could not find container \"1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266\": container with ID starting with 1d2c08a12279a46f5838647cab79c130389f876beb31d8b29d139b6b9f597266 not found: ID does not exist" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.826270 4840 scope.go:117] "RemoveContainer" containerID="eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839549 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-config\") pod \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839612 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zqkgz\" (UniqueName: \"kubernetes.io/projected/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-kube-api-access-zqkgz\") pod \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839646 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-client-ca\") pod \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839689 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-serving-cert\") pod \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839714 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-serving-cert\") pod \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839742 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-proxy-ca-bundles\") pod \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839776 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lv7f\" (UniqueName: \"kubernetes.io/projected/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-kube-api-access-6lv7f\") pod \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\" (UID: 
\"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839800 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-client-ca\") pod \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\" (UID: \"2e5d97a0-1882-45f9-8d5a-55c2d45e14b8\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.839840 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-config\") pod \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\" (UID: \"23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288\") " Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.841373 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-client-ca" (OuterVolumeSpecName: "client-ca") pod "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" (UID: "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.841751 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-client-ca" (OuterVolumeSpecName: "client-ca") pod "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" (UID: "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.842959 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" (UID: "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.843091 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-config" (OuterVolumeSpecName: "config") pod "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" (UID: "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.843519 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-config" (OuterVolumeSpecName: "config") pod "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" (UID: "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.843806 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.843841 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.843858 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.843901 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.843912 4840 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.846589 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" (UID: "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.846618 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" (UID: "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.846660 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-kube-api-access-6lv7f" (OuterVolumeSpecName: "kube-api-access-6lv7f") pod "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" (UID: "23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288"). InnerVolumeSpecName "kube-api-access-6lv7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.847105 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-kube-api-access-zqkgz" (OuterVolumeSpecName: "kube-api-access-zqkgz") pod "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" (UID: "2e5d97a0-1882-45f9-8d5a-55c2d45e14b8"). InnerVolumeSpecName "kube-api-access-zqkgz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.852030 4840 scope.go:117] "RemoveContainer" containerID="eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41" Jan 29 12:10:26 crc kubenswrapper[4840]: E0129 12:10:26.852560 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41\": container with ID starting with eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41 not found: ID does not exist" containerID="eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.852609 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41"} err="failed to get container status \"eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41\": rpc error: code = NotFound desc = could not find container \"eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41\": container with ID starting with eed46fa025f30538f5c3208d3402b6de4ab4035435741268993390c839111f41 not found: ID does not exist" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.945265 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqkgz\" (UniqueName: \"kubernetes.io/projected/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-kube-api-access-zqkgz\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.945314 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.945337 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:26 crc kubenswrapper[4840]: I0129 12:10:26.945357 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lv7f\" (UniqueName: \"kubernetes.io/projected/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288-kube-api-access-6lv7f\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:27 crc kubenswrapper[4840]: I0129 12:10:27.107005 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"] Jan 29 12:10:27 crc kubenswrapper[4840]: I0129 12:10:27.116013 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-rdkt6"] Jan 29 12:10:27 crc kubenswrapper[4840]: I0129 12:10:27.126983 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"] Jan 29 12:10:27 crc kubenswrapper[4840]: I0129 12:10:27.131090 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-svzf6"] Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.095355 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5c8985447-tfqrv"] Jan 29 12:10:28 crc kubenswrapper[4840]: E0129 12:10:28.096091 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" containerName="route-controller-manager" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.096110 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" containerName="route-controller-manager" Jan 29 12:10:28 crc kubenswrapper[4840]: E0129 12:10:28.096121 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" containerName="controller-manager" Jan 29 12:10:28 
crc kubenswrapper[4840]: I0129 12:10:28.096129 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" containerName="controller-manager" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.096269 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" containerName="route-controller-manager" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.096284 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" containerName="controller-manager" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.096916 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.102484 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.102696 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.102940 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.103122 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.103297 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.104043 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"] Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.109507 4840 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.111821 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.115033 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8985447-tfqrv"] Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.116687 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.116915 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.117809 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.118311 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.118459 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.118687 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.119921 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"] Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.121294 4840 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165135 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015cd172-e294-454b-b521-fa058d6bb518-serving-cert\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165230 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-config\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165263 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-proxy-ca-bundles\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165297 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crt76\" (UniqueName: \"kubernetes.io/projected/015cd172-e294-454b-b521-fa058d6bb518-kube-api-access-crt76\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165328 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-config\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165363 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-client-ca\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165385 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-client-ca\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165428 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018b4408-551c-4a0d-95cf-d1eaf9eab223-serving-cert\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.165454 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9g6s\" (UniqueName: \"kubernetes.io/projected/018b4408-551c-4a0d-95cf-d1eaf9eab223-kube-api-access-f9g6s\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " 
pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266499 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015cd172-e294-454b-b521-fa058d6bb518-serving-cert\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266570 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-config\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266595 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-proxy-ca-bundles\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266617 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crt76\" (UniqueName: \"kubernetes.io/projected/015cd172-e294-454b-b521-fa058d6bb518-kube-api-access-crt76\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266642 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-config\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266664 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-client-ca\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266683 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-client-ca\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266714 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018b4408-551c-4a0d-95cf-d1eaf9eab223-serving-cert\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.266734 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9g6s\" (UniqueName: \"kubernetes.io/projected/018b4408-551c-4a0d-95cf-d1eaf9eab223-kube-api-access-f9g6s\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.268108 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-client-ca\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.268777 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-client-ca\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.269529 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-config\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.269552 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-config\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.270288 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-proxy-ca-bundles\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.274721 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018b4408-551c-4a0d-95cf-d1eaf9eab223-serving-cert\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.277907 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015cd172-e294-454b-b521-fa058d6bb518-serving-cert\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.288385 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9g6s\" (UniqueName: \"kubernetes.io/projected/018b4408-551c-4a0d-95cf-d1eaf9eab223-kube-api-access-f9g6s\") pod \"route-controller-manager-785fdfb9bc-bbbbn\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.291122 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crt76\" (UniqueName: \"kubernetes.io/projected/015cd172-e294-454b-b521-fa058d6bb518-kube-api-access-crt76\") pod \"controller-manager-5c8985447-tfqrv\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.415851 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.437084 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.835300 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5c8985447-tfqrv"]
Jan 29 12:10:28 crc kubenswrapper[4840]: W0129 12:10:28.839688 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod015cd172_e294_454b_b521_fa058d6bb518.slice/crio-45e153d636ea4a29251a89f6d00adedafb4a66b36987dd8c2ed51126d6c4686a WatchSource:0}: Error finding container 45e153d636ea4a29251a89f6d00adedafb4a66b36987dd8c2ed51126d6c4686a: Status 404 returned error can't find the container with id 45e153d636ea4a29251a89f6d00adedafb4a66b36987dd8c2ed51126d6c4686a
Jan 29 12:10:28 crc kubenswrapper[4840]: I0129 12:10:28.887746 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"]
Jan 29 12:10:28 crc kubenswrapper[4840]: W0129 12:10:28.889849 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod018b4408_551c_4a0d_95cf_d1eaf9eab223.slice/crio-41d2da430433322ccff3d95d5fb4fdd90a8e6e9256a0acae307b2dee5ca3bd8f WatchSource:0}: Error finding container 41d2da430433322ccff3d95d5fb4fdd90a8e6e9256a0acae307b2dee5ca3bd8f: Status 404 returned error can't find the container with id 41d2da430433322ccff3d95d5fb4fdd90a8e6e9256a0acae307b2dee5ca3bd8f
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.013741 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288" path="/var/lib/kubelet/pods/23ef9d4c-dbd5-4db7-9cf0-b7ab2fe6f288/volumes"
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.014377 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e5d97a0-1882-45f9-8d5a-55c2d45e14b8" path="/var/lib/kubelet/pods/2e5d97a0-1882-45f9-8d5a-55c2d45e14b8/volumes"
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.816127 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" event={"ID":"018b4408-551c-4a0d-95cf-d1eaf9eab223","Type":"ContainerStarted","Data":"649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba"}
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.816473 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" event={"ID":"018b4408-551c-4a0d-95cf-d1eaf9eab223","Type":"ContainerStarted","Data":"41d2da430433322ccff3d95d5fb4fdd90a8e6e9256a0acae307b2dee5ca3bd8f"}
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.816493 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.817538 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" event={"ID":"015cd172-e294-454b-b521-fa058d6bb518","Type":"ContainerStarted","Data":"82300d8b07072ce82dd285ce77cd748475ac7550482b7e4cea6a448a8bae6d1c"}
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.817561 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" event={"ID":"015cd172-e294-454b-b521-fa058d6bb518","Type":"ContainerStarted","Data":"45e153d636ea4a29251a89f6d00adedafb4a66b36987dd8c2ed51126d6c4686a"}
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.817730 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.824013 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv"
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.825974 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.837745 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" podStartSLOduration=3.837725946 podStartE2EDuration="3.837725946s" podCreationTimestamp="2026-01-29 12:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:10:29.83675113 +0000 UTC m=+361.499731023" watchObservedRunningTime="2026-01-29 12:10:29.837725946 +0000 UTC m=+361.500705839"
Jan 29 12:10:29 crc kubenswrapper[4840]: I0129 12:10:29.855484 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" podStartSLOduration=3.8554611850000002 podStartE2EDuration="3.855461185s" podCreationTimestamp="2026-01-29 12:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:10:29.853715527 +0000 UTC m=+361.516695420" watchObservedRunningTime="2026-01-29 12:10:29.855461185 +0000 UTC m=+361.518441078"
Jan 29 12:10:31 crc kubenswrapper[4840]: I0129 12:10:31.166288 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 29 12:10:34 crc kubenswrapper[4840]: I0129 12:10:34.650861 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 29 12:10:37 crc kubenswrapper[4840]: I0129 12:10:37.260509 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 29 12:10:37 crc kubenswrapper[4840]: I0129 12:10:37.809389 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 29 12:10:38 crc kubenswrapper[4840]: I0129 12:10:38.097528 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 29 12:10:42 crc kubenswrapper[4840]: I0129 12:10:42.779310 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tnxg7"]
Jan 29 12:10:42 crc kubenswrapper[4840]: I0129 12:10:42.780296 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tnxg7" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerName="registry-server" containerID="cri-o://9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273" gracePeriod=2
Jan 29 12:10:42 crc kubenswrapper[4840]: I0129 12:10:42.976371 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sd7t2"]
Jan 29 12:10:42 crc kubenswrapper[4840]: I0129 12:10:42.977503 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sd7t2" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="registry-server" containerID="cri-o://5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011" gracePeriod=2
Jan 29 12:10:43 crc kubenswrapper[4840]: E0129 12:10:43.027929 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93c0bb8f_812d_4bf4_93c4_7ccd557d326d.slice/crio-5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011.scope\": RecentStats: unable to find data in memory cache]"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.047137 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.261076 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.285633 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-catalog-content\") pod \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") "
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.285682 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-utilities\") pod \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") "
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.285750 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6dpt\" (UniqueName: \"kubernetes.io/projected/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-kube-api-access-z6dpt\") pod \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\" (UID: \"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4\") "
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.288998 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-utilities" (OuterVolumeSpecName: "utilities") pod "dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" (UID: "dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.298233 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-kube-api-access-z6dpt" (OuterVolumeSpecName: "kube-api-access-z6dpt") pod "dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" (UID: "dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4"). InnerVolumeSpecName "kube-api-access-z6dpt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.346347 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" (UID: "dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.387203 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6dpt\" (UniqueName: \"kubernetes.io/projected/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-kube-api-access-z6dpt\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.387253 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.387264 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.436059 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.487621 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-catalog-content\") pod \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") "
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.487691 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-utilities\") pod \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") "
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.487725 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnck2\" (UniqueName: \"kubernetes.io/projected/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-kube-api-access-vnck2\") pod \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\" (UID: \"93c0bb8f-812d-4bf4-93c4-7ccd557d326d\") "
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.488798 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-utilities" (OuterVolumeSpecName: "utilities") pod "93c0bb8f-812d-4bf4-93c4-7ccd557d326d" (UID: "93c0bb8f-812d-4bf4-93c4-7ccd557d326d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.491993 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-kube-api-access-vnck2" (OuterVolumeSpecName: "kube-api-access-vnck2") pod "93c0bb8f-812d-4bf4-93c4-7ccd557d326d" (UID: "93c0bb8f-812d-4bf4-93c4-7ccd557d326d"). InnerVolumeSpecName "kube-api-access-vnck2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.538726 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "93c0bb8f-812d-4bf4-93c4-7ccd557d326d" (UID: "93c0bb8f-812d-4bf4-93c4-7ccd557d326d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.589892 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.590062 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.590081 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vnck2\" (UniqueName: \"kubernetes.io/projected/93c0bb8f-812d-4bf4-93c4-7ccd557d326d-kube-api-access-vnck2\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.906364 4840 generic.go:334] "Generic (PLEG): container finished" podID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerID="5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011" exitCode=0
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.906439 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sd7t2"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.906450 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd7t2" event={"ID":"93c0bb8f-812d-4bf4-93c4-7ccd557d326d","Type":"ContainerDied","Data":"5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011"}
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.907002 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sd7t2" event={"ID":"93c0bb8f-812d-4bf4-93c4-7ccd557d326d","Type":"ContainerDied","Data":"842b1028f7332e1f79432690a345d79e44f22df7252e7aa42f4a4179a37e8df0"}
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.907035 4840 scope.go:117] "RemoveContainer" containerID="5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.922429 4840 generic.go:334] "Generic (PLEG): container finished" podID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerID="9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273" exitCode=0
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.922497 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxg7" event={"ID":"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4","Type":"ContainerDied","Data":"9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273"}
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.922526 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tnxg7"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.922545 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tnxg7" event={"ID":"dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4","Type":"ContainerDied","Data":"21644efa53e305ecf4837889d2a6b6df7c19ec6b36bd677dd865362913f7efca"}
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.931361 4840 scope.go:117] "RemoveContainer" containerID="5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.955737 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sd7t2"]
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.960113 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sd7t2"]
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.963510 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tnxg7"]
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.967339 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tnxg7"]
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.969873 4840 scope.go:117] "RemoveContainer" containerID="e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.984535 4840 scope.go:117] "RemoveContainer" containerID="5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011"
Jan 29 12:10:43 crc kubenswrapper[4840]: E0129 12:10:43.985014 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011\": container with ID starting with 5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011 not found: ID does not exist" containerID="5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.985052 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011"} err="failed to get container status \"5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011\": rpc error: code = NotFound desc = could not find container \"5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011\": container with ID starting with 5cb53957b3f0cfe705a5dc352e71d9fcfb2ff42bddfd1ff14d858ac290dc5011 not found: ID does not exist"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.985086 4840 scope.go:117] "RemoveContainer" containerID="5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab"
Jan 29 12:10:43 crc kubenswrapper[4840]: E0129 12:10:43.985777 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab\": container with ID starting with 5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab not found: ID does not exist" containerID="5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.985811 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab"} err="failed to get container status \"5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab\": rpc error: code = NotFound desc = could not find container \"5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab\": container with ID starting with 5b4612481dc01e969ca63e7ee7a274eb041aab3f297e28b4e2b61b7366e2dbab not found: ID does not exist"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.985832 4840 scope.go:117] "RemoveContainer" containerID="e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2"
Jan 29 12:10:43 crc kubenswrapper[4840]: E0129 12:10:43.987842 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2\": container with ID starting with e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2 not found: ID does not exist" containerID="e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.987875 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2"} err="failed to get container status \"e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2\": rpc error: code = NotFound desc = could not find container \"e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2\": container with ID starting with e069aa2325d64d42fc5d8cd586991846ed147a292f5d22b8c6abe3b22827f0c2 not found: ID does not exist"
Jan 29 12:10:43 crc kubenswrapper[4840]: I0129 12:10:43.987894 4840 scope.go:117] "RemoveContainer" containerID="9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.004520 4840 scope.go:117] "RemoveContainer" containerID="cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.020647 4840 scope.go:117] "RemoveContainer" containerID="06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.036149 4840 scope.go:117] "RemoveContainer" containerID="9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273"
Jan 29 12:10:44 crc kubenswrapper[4840]: E0129 12:10:44.036797 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273\": container with ID starting with 9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273 not found: ID does not exist" containerID="9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.036836 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273"} err="failed to get container status \"9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273\": rpc error: code = NotFound desc = could not find container \"9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273\": container with ID starting with 9675e46dcde6dee23b2b347babe2ae7265d24b902982ab8c9bb99fbb79f71273 not found: ID does not exist"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.036866 4840 scope.go:117] "RemoveContainer" containerID="cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646"
Jan 29 12:10:44 crc kubenswrapper[4840]: E0129 12:10:44.037579 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646\": container with ID starting with cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646 not found: ID does not exist" containerID="cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.037612 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646"} err="failed to get container status \"cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646\": rpc error: code = NotFound desc = could not find container \"cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646\": container with ID starting with cdefe3951d5da6e4329c47fa3a7d015de8030f0dc7e42cd6a1051e49e4d77646 not found: ID does not exist"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.037631 4840 scope.go:117] "RemoveContainer" containerID="06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a"
Jan 29 12:10:44 crc kubenswrapper[4840]: E0129 12:10:44.038144 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a\": container with ID starting with 06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a not found: ID does not exist" containerID="06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a"
Jan 29 12:10:44 crc kubenswrapper[4840]: I0129 12:10:44.038263 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a"} err="failed to get container status \"06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a\": rpc error: code = NotFound desc = could not find container \"06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a\": container with ID starting with 06920e79c7c8b817eee3166d052286186022d12805f9c627d23029c131ef345a not found: ID does not exist"
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.016390 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" path="/var/lib/kubelet/pods/93c0bb8f-812d-4bf4-93c4-7ccd557d326d/volumes"
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.018732 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" path="/var/lib/kubelet/pods/dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4/volumes"
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.177400 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgg49"]
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.177698 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vgg49" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerName="registry-server" containerID="cri-o://47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288" gracePeriod=2
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.375305 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqht4"]
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.375616 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pqht4" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="registry-server" containerID="cri-o://8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e" gracePeriod=2
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.699828 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgg49"
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.832469 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-catalog-content\") pod \"fcb94932-ec4e-48f5-bda1-51aa89a53087\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") "
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.832555 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-utilities\") pod \"fcb94932-ec4e-48f5-bda1-51aa89a53087\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") "
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.832649 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kndxl\" (UniqueName: \"kubernetes.io/projected/fcb94932-ec4e-48f5-bda1-51aa89a53087-kube-api-access-kndxl\") pod \"fcb94932-ec4e-48f5-bda1-51aa89a53087\" (UID: \"fcb94932-ec4e-48f5-bda1-51aa89a53087\") "
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.834196 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-utilities" (OuterVolumeSpecName: "utilities") pod "fcb94932-ec4e-48f5-bda1-51aa89a53087" (UID: "fcb94932-ec4e-48f5-bda1-51aa89a53087"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.842240 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcb94932-ec4e-48f5-bda1-51aa89a53087-kube-api-access-kndxl" (OuterVolumeSpecName: "kube-api-access-kndxl") pod "fcb94932-ec4e-48f5-bda1-51aa89a53087" (UID: "fcb94932-ec4e-48f5-bda1-51aa89a53087"). InnerVolumeSpecName "kube-api-access-kndxl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.855985 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcb94932-ec4e-48f5-bda1-51aa89a53087" (UID: "fcb94932-ec4e-48f5-bda1-51aa89a53087"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.881387 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pqht4"
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.934027 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.934379 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcb94932-ec4e-48f5-bda1-51aa89a53087-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.934444 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kndxl\" (UniqueName: \"kubernetes.io/projected/fcb94932-ec4e-48f5-bda1-51aa89a53087-kube-api-access-kndxl\") on node \"crc\" DevicePath \"\""
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.947885 4840 generic.go:334] "Generic (PLEG): container finished" podID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerID="8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e" exitCode=0
Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.948176 4840 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-pqht4" Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.948963 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqht4" event={"ID":"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e","Type":"ContainerDied","Data":"8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e"} Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.949066 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pqht4" event={"ID":"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e","Type":"ContainerDied","Data":"14ec9233491bc54db1e03c95cf5f9ff671a44768876c89a07adb11579276a858"} Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.949095 4840 scope.go:117] "RemoveContainer" containerID="8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e" Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.956358 4840 generic.go:334] "Generic (PLEG): container finished" podID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerID="47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288" exitCode=0 Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.956415 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgg49" event={"ID":"fcb94932-ec4e-48f5-bda1-51aa89a53087","Type":"ContainerDied","Data":"47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288"} Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.956461 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vgg49" event={"ID":"fcb94932-ec4e-48f5-bda1-51aa89a53087","Type":"ContainerDied","Data":"b394c303327cba9606513796f8e6a6d35b6589ffda17a0f2a0c89cee68c0c854"} Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.956521 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vgg49" Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.975366 4840 scope.go:117] "RemoveContainer" containerID="2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790" Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.987652 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgg49"] Jan 29 12:10:45 crc kubenswrapper[4840]: I0129 12:10:45.991796 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vgg49"] Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.002086 4840 scope.go:117] "RemoveContainer" containerID="a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.035249 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-utilities\") pod \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.035315 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jznq\" (UniqueName: \"kubernetes.io/projected/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-kube-api-access-6jznq\") pod \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.035399 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-catalog-content\") pod \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\" (UID: \"a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e\") " Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.036167 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-utilities" (OuterVolumeSpecName: "utilities") pod "a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" (UID: "a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.038129 4840 scope.go:117] "RemoveContainer" containerID="8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.038141 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-kube-api-access-6jznq" (OuterVolumeSpecName: "kube-api-access-6jznq") pod "a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" (UID: "a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e"). InnerVolumeSpecName "kube-api-access-6jznq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:10:46 crc kubenswrapper[4840]: E0129 12:10:46.038799 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e\": container with ID starting with 8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e not found: ID does not exist" containerID="8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.038835 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e"} err="failed to get container status \"8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e\": rpc error: code = NotFound desc = could not find container \"8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e\": container with ID starting with 8568b3f6c3d24297f93e71cf48f98ab27af265700d49be732ebb5b631fb5229e not found: ID does not exist" Jan 29 
12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.038858 4840 scope.go:117] "RemoveContainer" containerID="2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790" Jan 29 12:10:46 crc kubenswrapper[4840]: E0129 12:10:46.039165 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790\": container with ID starting with 2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790 not found: ID does not exist" containerID="2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.039205 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790"} err="failed to get container status \"2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790\": rpc error: code = NotFound desc = could not find container \"2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790\": container with ID starting with 2b1654be74720991118ecfd057e255c85d95ef2b91110007c21c65b0e370c790 not found: ID does not exist" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.039234 4840 scope.go:117] "RemoveContainer" containerID="a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6" Jan 29 12:10:46 crc kubenswrapper[4840]: E0129 12:10:46.039549 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6\": container with ID starting with a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6 not found: ID does not exist" containerID="a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.039576 4840 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6"} err="failed to get container status \"a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6\": rpc error: code = NotFound desc = could not find container \"a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6\": container with ID starting with a3b75d2df9ccf2c2173cd489118f8275ccc7f70406d9e7da015329c74e111eb6 not found: ID does not exist" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.039591 4840 scope.go:117] "RemoveContainer" containerID="47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.057470 4840 scope.go:117] "RemoveContainer" containerID="d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.078424 4840 scope.go:117] "RemoveContainer" containerID="3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.097311 4840 scope.go:117] "RemoveContainer" containerID="47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288" Jan 29 12:10:46 crc kubenswrapper[4840]: E0129 12:10:46.097926 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288\": container with ID starting with 47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288 not found: ID does not exist" containerID="47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.098004 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288"} err="failed to get container status \"47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288\": rpc error: 
code = NotFound desc = could not find container \"47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288\": container with ID starting with 47d66f94b3a703dfd28270053f9e27a443b712c3e371a601350e281efd537288 not found: ID does not exist" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.098043 4840 scope.go:117] "RemoveContainer" containerID="d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242" Jan 29 12:10:46 crc kubenswrapper[4840]: E0129 12:10:46.099369 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242\": container with ID starting with d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242 not found: ID does not exist" containerID="d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.099405 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242"} err="failed to get container status \"d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242\": rpc error: code = NotFound desc = could not find container \"d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242\": container with ID starting with d7b23facd586db9e61559a528705c88c0dad8d70fafc2bc87ad2615968156242 not found: ID does not exist" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.099423 4840 scope.go:117] "RemoveContainer" containerID="3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051" Jan 29 12:10:46 crc kubenswrapper[4840]: E0129 12:10:46.099731 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051\": container with ID starting with 
3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051 not found: ID does not exist" containerID="3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.099764 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051"} err="failed to get container status \"3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051\": rpc error: code = NotFound desc = could not find container \"3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051\": container with ID starting with 3bb438cb58947bcaeed5973bb148b1728fb93dcc1033ad5125129eb50b967051 not found: ID does not exist" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.137524 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.137588 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jznq\" (UniqueName: \"kubernetes.io/projected/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-kube-api-access-6jznq\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.175303 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" (UID: "a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.238599 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.280303 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pqht4"] Jan 29 12:10:46 crc kubenswrapper[4840]: I0129 12:10:46.285592 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pqht4"] Jan 29 12:10:47 crc kubenswrapper[4840]: I0129 12:10:47.010243 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" path="/var/lib/kubelet/pods/a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e/volumes" Jan 29 12:10:47 crc kubenswrapper[4840]: I0129 12:10:47.011048 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" path="/var/lib/kubelet/pods/fcb94932-ec4e-48f5-bda1-51aa89a53087/volumes" Jan 29 12:10:50 crc kubenswrapper[4840]: I0129 12:10:50.385609 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 29 12:10:53 crc kubenswrapper[4840]: I0129 12:10:53.522739 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:10:53 crc kubenswrapper[4840]: I0129 12:10:53.522829 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:10:57 crc kubenswrapper[4840]: I0129 12:10:57.997666 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.322575 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2k7s"] Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.323866 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c2k7s" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="registry-server" containerID="cri-o://e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59" gracePeriod=30 Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.336046 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-486ss"] Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.336353 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-486ss" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="registry-server" containerID="cri-o://a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8" gracePeriod=30 Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.355126 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwqw5"] Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.355355 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" containerID="cri-o://107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0" gracePeriod=30 Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 
12:11:22.371538 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6w65"] Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.371866 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-r6w65" podUID="12410648-0772-40f7-9261-107634802711" containerName="registry-server" containerID="cri-o://8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680" gracePeriod=30 Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.387098 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jj2gn"] Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.387600 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jj2gn" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="registry-server" containerID="cri-o://64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" gracePeriod=30 Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.393742 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6ssn6"] Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394081 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394100 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394114 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394120 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" 
containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394129 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394136 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394145 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394153 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394169 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394176 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394187 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394193 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394202 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394208 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" 
containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394217 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394223 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394233 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394240 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="extract-utilities" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394254 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394260 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394270 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394277 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.394284 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394290 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" 
containerName="extract-content" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394398 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="93c0bb8f-812d-4bf4-93c4-7ccd557d326d" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394412 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcb94932-ec4e-48f5-bda1-51aa89a53087" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394424 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0b191e2-8b9a-4af6-9c9a-c77ec92c4d1e" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.394432 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfbbce61-92b7-40f9-ac7f-96b2ad2d43b4" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.395182 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.400753 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6ssn6"] Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.504466 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72c7360b-b452-432c-a48c-319d98003756-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.504676 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljddf\" (UniqueName: \"kubernetes.io/projected/72c7360b-b452-432c-a48c-319d98003756-kube-api-access-ljddf\") pod 
\"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.504752 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72c7360b-b452-432c-a48c-319d98003756-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.605885 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljddf\" (UniqueName: \"kubernetes.io/projected/72c7360b-b452-432c-a48c-319d98003756-kube-api-access-ljddf\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.605981 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72c7360b-b452-432c-a48c-319d98003756-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.606059 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72c7360b-b452-432c-a48c-319d98003756-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.608035 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/72c7360b-b452-432c-a48c-319d98003756-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.623976 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/72c7360b-b452-432c-a48c-319d98003756-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.627785 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljddf\" (UniqueName: \"kubernetes.io/projected/72c7360b-b452-432c-a48c-319d98003756-kube-api-access-ljddf\") pod \"marketplace-operator-79b997595-6ssn6\" (UID: \"72c7360b-b452-432c-a48c-319d98003756\") " pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.794257 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.817153 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-c2k7s" Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.827657 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907 is running failed: container process not found" containerID="64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.833894 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907 is running failed: container process not found" containerID="64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.834497 4840 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907 is running failed: container process not found" containerID="64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" cmd=["grpc_health_probe","-addr=:50051"] Jan 29 12:11:22 crc kubenswrapper[4840]: E0129 12:11:22.834538 4840 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-jj2gn" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="registry-server" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.909820 4840 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-wnb5p\" (UniqueName: \"kubernetes.io/projected/f8720503-5456-4934-a5a9-d58d6eeeb0a6-kube-api-access-wnb5p\") pod \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.909895 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-catalog-content\") pod \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.909922 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-utilities\") pod \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\" (UID: \"f8720503-5456-4934-a5a9-d58d6eeeb0a6\") " Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.911779 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-utilities" (OuterVolumeSpecName: "utilities") pod "f8720503-5456-4934-a5a9-d58d6eeeb0a6" (UID: "f8720503-5456-4934-a5a9-d58d6eeeb0a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.920052 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8720503-5456-4934-a5a9-d58d6eeeb0a6-kube-api-access-wnb5p" (OuterVolumeSpecName: "kube-api-access-wnb5p") pod "f8720503-5456-4934-a5a9-d58d6eeeb0a6" (UID: "f8720503-5456-4934-a5a9-d58d6eeeb0a6"). InnerVolumeSpecName "kube-api-access-wnb5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.935809 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jj2gn" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.952290 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/2.log" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.952881 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.979093 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:11:22 crc kubenswrapper[4840]: I0129 12:11:22.985856 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f8720503-5456-4934-a5a9-d58d6eeeb0a6" (UID: "f8720503-5456-4934-a5a9-d58d6eeeb0a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.000873 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-486ss" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012250 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvfk6\" (UniqueName: \"kubernetes.io/projected/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-kube-api-access-fvfk6\") pod \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012339 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-utilities\") pod \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012380 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmtz8\" (UniqueName: \"kubernetes.io/projected/660d278f-da23-4033-82de-88c42ef375ed-kube-api-access-cmtz8\") pod \"660d278f-da23-4033-82de-88c42ef375ed\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012403 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/660d278f-da23-4033-82de-88c42ef375ed-marketplace-operator-metrics\") pod \"660d278f-da23-4033-82de-88c42ef375ed\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012457 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660d278f-da23-4033-82de-88c42ef375ed-marketplace-trusted-ca\") pod \"660d278f-da23-4033-82de-88c42ef375ed\" (UID: \"660d278f-da23-4033-82de-88c42ef375ed\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012505 4840 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-catalog-content\") pod \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\" (UID: \"dd2c47ef-59b6-409e-8091-2e6f20eb89cb\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012794 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnb5p\" (UniqueName: \"kubernetes.io/projected/f8720503-5456-4934-a5a9-d58d6eeeb0a6-kube-api-access-wnb5p\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012809 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.012819 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f8720503-5456-4934-a5a9-d58d6eeeb0a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.015607 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-utilities" (OuterVolumeSpecName: "utilities") pod "dd2c47ef-59b6-409e-8091-2e6f20eb89cb" (UID: "dd2c47ef-59b6-409e-8091-2e6f20eb89cb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.015662 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/660d278f-da23-4033-82de-88c42ef375ed-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "660d278f-da23-4033-82de-88c42ef375ed" (UID: "660d278f-da23-4033-82de-88c42ef375ed"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.021258 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/660d278f-da23-4033-82de-88c42ef375ed-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "660d278f-da23-4033-82de-88c42ef375ed" (UID: "660d278f-da23-4033-82de-88c42ef375ed"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.024810 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/660d278f-da23-4033-82de-88c42ef375ed-kube-api-access-cmtz8" (OuterVolumeSpecName: "kube-api-access-cmtz8") pod "660d278f-da23-4033-82de-88c42ef375ed" (UID: "660d278f-da23-4033-82de-88c42ef375ed"). InnerVolumeSpecName "kube-api-access-cmtz8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.039669 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-kube-api-access-fvfk6" (OuterVolumeSpecName: "kube-api-access-fvfk6") pod "dd2c47ef-59b6-409e-8091-2e6f20eb89cb" (UID: "dd2c47ef-59b6-409e-8091-2e6f20eb89cb"). InnerVolumeSpecName "kube-api-access-fvfk6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.113925 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-catalog-content\") pod \"12410648-0772-40f7-9261-107634802711\" (UID: \"12410648-0772-40f7-9261-107634802711\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114064 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cdj8k\" (UniqueName: \"kubernetes.io/projected/3f137467-8040-45d5-bfa1-89860498eb85-kube-api-access-cdj8k\") pod \"3f137467-8040-45d5-bfa1-89860498eb85\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114105 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rqz7\" (UniqueName: \"kubernetes.io/projected/12410648-0772-40f7-9261-107634802711-kube-api-access-6rqz7\") pod \"12410648-0772-40f7-9261-107634802711\" (UID: \"12410648-0772-40f7-9261-107634802711\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114138 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-utilities\") pod \"12410648-0772-40f7-9261-107634802711\" (UID: \"12410648-0772-40f7-9261-107634802711\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114199 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-catalog-content\") pod \"3f137467-8040-45d5-bfa1-89860498eb85\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114253 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-utilities\") pod \"3f137467-8040-45d5-bfa1-89860498eb85\" (UID: \"3f137467-8040-45d5-bfa1-89860498eb85\") " Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114500 4840 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/660d278f-da23-4033-82de-88c42ef375ed-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114519 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvfk6\" (UniqueName: \"kubernetes.io/projected/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-kube-api-access-fvfk6\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114530 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114539 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cmtz8\" (UniqueName: \"kubernetes.io/projected/660d278f-da23-4033-82de-88c42ef375ed-kube-api-access-cmtz8\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.114548 4840 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/660d278f-da23-4033-82de-88c42ef375ed-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.115360 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-utilities" (OuterVolumeSpecName: "utilities") pod "3f137467-8040-45d5-bfa1-89860498eb85" (UID: "3f137467-8040-45d5-bfa1-89860498eb85"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.115912 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-utilities" (OuterVolumeSpecName: "utilities") pod "12410648-0772-40f7-9261-107634802711" (UID: "12410648-0772-40f7-9261-107634802711"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.121233 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f137467-8040-45d5-bfa1-89860498eb85-kube-api-access-cdj8k" (OuterVolumeSpecName: "kube-api-access-cdj8k") pod "3f137467-8040-45d5-bfa1-89860498eb85" (UID: "3f137467-8040-45d5-bfa1-89860498eb85"). InnerVolumeSpecName "kube-api-access-cdj8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.126127 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12410648-0772-40f7-9261-107634802711-kube-api-access-6rqz7" (OuterVolumeSpecName: "kube-api-access-6rqz7") pod "12410648-0772-40f7-9261-107634802711" (UID: "12410648-0772-40f7-9261-107634802711"). InnerVolumeSpecName "kube-api-access-6rqz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.136018 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12410648-0772-40f7-9261-107634802711" (UID: "12410648-0772-40f7-9261-107634802711"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.181941 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd2c47ef-59b6-409e-8091-2e6f20eb89cb" (UID: "dd2c47ef-59b6-409e-8091-2e6f20eb89cb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.183790 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3f137467-8040-45d5-bfa1-89860498eb85" (UID: "3f137467-8040-45d5-bfa1-89860498eb85"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.185030 4840 generic.go:334] "Generic (PLEG): container finished" podID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerID="64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" exitCode=0 Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.185147 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerDied","Data":"64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.185184 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jj2gn" event={"ID":"dd2c47ef-59b6-409e-8091-2e6f20eb89cb","Type":"ContainerDied","Data":"22af931aee8fdc8f8a3ec880f6f42bdc51eb9f0feed250a71ec464ac771462bb"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.185178 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jj2gn" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.185205 4840 scope.go:117] "RemoveContainer" containerID="64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.189013 4840 generic.go:334] "Generic (PLEG): container finished" podID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerID="e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59" exitCode=0 Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.189092 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2k7s" event={"ID":"f8720503-5456-4934-a5a9-d58d6eeeb0a6","Type":"ContainerDied","Data":"e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.189121 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c2k7s" event={"ID":"f8720503-5456-4934-a5a9-d58d6eeeb0a6","Type":"ContainerDied","Data":"15ddf5699af3a0577583e7c62665cbc0fbd14f0ceab114f1e90197ce087b361d"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.189238 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c2k7s" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.193257 4840 generic.go:334] "Generic (PLEG): container finished" podID="3f137467-8040-45d5-bfa1-89860498eb85" containerID="a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8" exitCode=0 Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.193366 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-486ss" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.193330 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486ss" event={"ID":"3f137467-8040-45d5-bfa1-89860498eb85","Type":"ContainerDied","Data":"a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.193521 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-486ss" event={"ID":"3f137467-8040-45d5-bfa1-89860498eb85","Type":"ContainerDied","Data":"3d9c9c71f0533d4fc6dc4a026a8d28ca635f93cf1b650f01218293cb56a52802"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.200383 4840 generic.go:334] "Generic (PLEG): container finished" podID="12410648-0772-40f7-9261-107634802711" containerID="8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680" exitCode=0 Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.200495 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6w65" event={"ID":"12410648-0772-40f7-9261-107634802711","Type":"ContainerDied","Data":"8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.200536 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-r6w65" event={"ID":"12410648-0772-40f7-9261-107634802711","Type":"ContainerDied","Data":"300fdfc67c07d9980861b9c7777a9831b8c0da119394577be27e1f3086ba5790"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.200643 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-r6w65" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.203755 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-cwqw5_660d278f-da23-4033-82de-88c42ef375ed/marketplace-operator/2.log" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.203812 4840 generic.go:334] "Generic (PLEG): container finished" podID="660d278f-da23-4033-82de-88c42ef375ed" containerID="107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0" exitCode=0 Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.203846 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerDied","Data":"107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.203875 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" event={"ID":"660d278f-da23-4033-82de-88c42ef375ed","Type":"ContainerDied","Data":"1a8356ab5e3203f2949834ac88875b11449bbba42ce099a53317c776fc04258b"} Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.203974 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-cwqw5" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.216765 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.216831 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd2c47ef-59b6-409e-8091-2e6f20eb89cb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.216849 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3f137467-8040-45d5-bfa1-89860498eb85-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.216862 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.216899 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cdj8k\" (UniqueName: \"kubernetes.io/projected/3f137467-8040-45d5-bfa1-89860498eb85-kube-api-access-cdj8k\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.216917 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rqz7\" (UniqueName: \"kubernetes.io/projected/12410648-0772-40f7-9261-107634802711-kube-api-access-6rqz7\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.216929 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12410648-0772-40f7-9261-107634802711-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:23 crc 
kubenswrapper[4840]: I0129 12:11:23.218077 4840 scope.go:117] "RemoveContainer" containerID="a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.225443 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c2k7s"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.239625 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c2k7s"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.250824 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jj2gn"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.264910 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jj2gn"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.266598 4840 scope.go:117] "RemoveContainer" containerID="d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.274632 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6w65"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.280126 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-r6w65"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.287067 4840 scope.go:117] "RemoveContainer" containerID="64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.288787 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwqw5"] Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.290836 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907\": container 
with ID starting with 64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907 not found: ID does not exist" containerID="64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.290884 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907"} err="failed to get container status \"64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907\": rpc error: code = NotFound desc = could not find container \"64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907\": container with ID starting with 64c9b13aca9c7eb14c84244c5be7162a6780b7e6ce8cfae6d38f76ecdbcc5907 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.290923 4840 scope.go:117] "RemoveContainer" containerID="a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.291483 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494\": container with ID starting with a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494 not found: ID does not exist" containerID="a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.291515 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494"} err="failed to get container status \"a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494\": rpc error: code = NotFound desc = could not find container \"a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494\": container with ID starting with a7e5acb081ba59a7630dd4c3ec7c15eb13f3886b1a3ef85bfe691b9b296c5494 not 
found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.291536 4840 scope.go:117] "RemoveContainer" containerID="d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.292123 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330\": container with ID starting with d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330 not found: ID does not exist" containerID="d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.292190 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330"} err="failed to get container status \"d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330\": rpc error: code = NotFound desc = could not find container \"d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330\": container with ID starting with d58a8f2c253b6fd6b5ffafee7b303f1ac1068c4c1cda80bef1ce2432acf30330 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.292236 4840 scope.go:117] "RemoveContainer" containerID="e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.294277 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-cwqw5"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.298085 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-486ss"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.301825 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-486ss"] Jan 29 12:11:23 crc 
kubenswrapper[4840]: I0129 12:11:23.307135 4840 scope.go:117] "RemoveContainer" containerID="5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.320832 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-6ssn6"] Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.337568 4840 scope.go:117] "RemoveContainer" containerID="ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.360743 4840 scope.go:117] "RemoveContainer" containerID="e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.361802 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59\": container with ID starting with e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59 not found: ID does not exist" containerID="e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.361851 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59"} err="failed to get container status \"e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59\": rpc error: code = NotFound desc = could not find container \"e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59\": container with ID starting with e0221e6235d76b986822d75a82542a9ce0a17e27b7d3e50a32965d514da40c59 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.361890 4840 scope.go:117] "RemoveContainer" containerID="5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 
12:11:23.363765 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134\": container with ID starting with 5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134 not found: ID does not exist" containerID="5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.363805 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134"} err="failed to get container status \"5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134\": rpc error: code = NotFound desc = could not find container \"5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134\": container with ID starting with 5a19843e458896efa2a0f73449f30f54f472cda8c520ddf6a459fb6e4299d134 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.363834 4840 scope.go:117] "RemoveContainer" containerID="ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.364565 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491\": container with ID starting with ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491 not found: ID does not exist" containerID="ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.364590 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491"} err="failed to get container status \"ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491\": rpc 
error: code = NotFound desc = could not find container \"ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491\": container with ID starting with ec8c004f1462cb40bdd2a8a69cd2941545feef96c6680123a23738ba01506491 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.364607 4840 scope.go:117] "RemoveContainer" containerID="a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.384533 4840 scope.go:117] "RemoveContainer" containerID="f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.404412 4840 scope.go:117] "RemoveContainer" containerID="1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.424589 4840 scope.go:117] "RemoveContainer" containerID="a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.425312 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8\": container with ID starting with a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8 not found: ID does not exist" containerID="a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.425392 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8"} err="failed to get container status \"a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8\": rpc error: code = NotFound desc = could not find container \"a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8\": container with ID starting with a950f7dc16aafbf9b5e8c3681e69f4415dc0b032eb01554f7621b384821909f8 not 
found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.425633 4840 scope.go:117] "RemoveContainer" containerID="f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.426188 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809\": container with ID starting with f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809 not found: ID does not exist" containerID="f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.426219 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809"} err="failed to get container status \"f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809\": rpc error: code = NotFound desc = could not find container \"f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809\": container with ID starting with f492370aaa3bf184513076529e03dd7f0b1ae2e016c7888abb731e53cc690809 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.426237 4840 scope.go:117] "RemoveContainer" containerID="1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.426879 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03\": container with ID starting with 1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03 not found: ID does not exist" containerID="1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.426907 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03"} err="failed to get container status \"1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03\": rpc error: code = NotFound desc = could not find container \"1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03\": container with ID starting with 1ebdf999fb412d99b0a3a2306bce2995081530907f24d8b236d3ff12d65cab03 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.426923 4840 scope.go:117] "RemoveContainer" containerID="8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.454710 4840 scope.go:117] "RemoveContainer" containerID="273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.473226 4840 scope.go:117] "RemoveContainer" containerID="855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.523064 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.523153 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.523219 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.523807 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ff863e72d8fdd89fd574ba3941e895f5b60a6ec9ccb97dd5b92e67bce246615"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.523892 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://5ff863e72d8fdd89fd574ba3941e895f5b60a6ec9ccb97dd5b92e67bce246615" gracePeriod=600 Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.536484 4840 scope.go:117] "RemoveContainer" containerID="8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.537037 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680\": container with ID starting with 8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680 not found: ID does not exist" containerID="8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.537066 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680"} err="failed to get container status \"8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680\": rpc error: code = NotFound desc = could not find container 
\"8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680\": container with ID starting with 8570655e1bdda6dc5dd4dde7bdb94d5b51177ad7e48d7523d02f792b65640680 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.537088 4840 scope.go:117] "RemoveContainer" containerID="273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.537578 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3\": container with ID starting with 273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3 not found: ID does not exist" containerID="273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.537603 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3"} err="failed to get container status \"273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3\": rpc error: code = NotFound desc = could not find container \"273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3\": container with ID starting with 273f1a79cacb530bed367a770dbddabdfd026eada616e15af085e7a144cc4bf3 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.537628 4840 scope.go:117] "RemoveContainer" containerID="855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.537886 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7\": container with ID starting with 855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7 not found: ID does not exist" 
containerID="855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.537909 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7"} err="failed to get container status \"855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7\": rpc error: code = NotFound desc = could not find container \"855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7\": container with ID starting with 855d4beb25910e0f2adac654bce3abb559531acf76e239a55d0e82d7582ab7c7 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.537924 4840 scope.go:117] "RemoveContainer" containerID="107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.567389 4840 scope.go:117] "RemoveContainer" containerID="c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.601466 4840 scope.go:117] "RemoveContainer" containerID="107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.605118 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0\": container with ID starting with 107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0 not found: ID does not exist" containerID="107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.605166 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0"} err="failed to get container status 
\"107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0\": rpc error: code = NotFound desc = could not find container \"107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0\": container with ID starting with 107a2d5846138c342f7c5fc8030079013a2a876e23954049a94cfd05d5cc91c0 not found: ID does not exist" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.605209 4840 scope.go:117] "RemoveContainer" containerID="c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6" Jan 29 12:11:23 crc kubenswrapper[4840]: E0129 12:11:23.608703 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6\": container with ID starting with c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6 not found: ID does not exist" containerID="c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6" Jan 29 12:11:23 crc kubenswrapper[4840]: I0129 12:11:23.608749 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6"} err="failed to get container status \"c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6\": rpc error: code = NotFound desc = could not find container \"c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6\": container with ID starting with c7f952b8c3cf784aef3a5493acabbd256bfb684005aba692753a7c1104453bb6 not found: ID does not exist" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.212455 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" event={"ID":"72c7360b-b452-432c-a48c-319d98003756","Type":"ContainerStarted","Data":"5fe5aa89b0f15ec6ea366abc1d42ea5d3dc44d691b595cfa33b2dd897631fb92"} Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.212500 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" event={"ID":"72c7360b-b452-432c-a48c-319d98003756","Type":"ContainerStarted","Data":"8efdfcb0c69d404adcb591cda6c6938fe7a2f523a80effa090a9c59ce783958c"} Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.213799 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.219169 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.241739 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-6ssn6" podStartSLOduration=2.241718917 podStartE2EDuration="2.241718917s" podCreationTimestamp="2026-01-29 12:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:11:24.238922212 +0000 UTC m=+415.901902095" watchObservedRunningTime="2026-01-29 12:11:24.241718917 +0000 UTC m=+415.904698810" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.251966 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="5ff863e72d8fdd89fd574ba3941e895f5b60a6ec9ccb97dd5b92e67bce246615" exitCode=0 Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.252396 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"5ff863e72d8fdd89fd574ba3941e895f5b60a6ec9ccb97dd5b92e67bce246615"} Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.252435 4840 scope.go:117] "RemoveContainer" containerID="009822b4e367b85e0f177853a620265f5c163a13cf6edd19acb10964bea38d2b" 
Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561304 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dlf42"] Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561555 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561570 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561578 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12410648-0772-40f7-9261-107634802711" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561584 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="12410648-0772-40f7-9261-107634802711" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561593 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561600 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561610 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="extract-utilities" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561619 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="extract-utilities" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561631 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="extract-utilities" Jan 29 12:11:24 crc 
kubenswrapper[4840]: I0129 12:11:24.561638 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="extract-utilities" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561648 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561654 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561664 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="extract-utilities" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561670 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="extract-utilities" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561676 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561681 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561689 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12410648-0772-40f7-9261-107634802711" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561694 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="12410648-0772-40f7-9261-107634802711" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561704 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc 
kubenswrapper[4840]: I0129 12:11:24.561709 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561717 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12410648-0772-40f7-9261-107634802711" containerName="extract-utilities" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561723 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="12410648-0772-40f7-9261-107634802711" containerName="extract-utilities" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561730 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561735 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561742 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561749 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561756 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561762 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="extract-content" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561769 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 
12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561775 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: E0129 12:11:24.561787 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561793 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561887 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561896 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561904 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561913 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f137467-8040-45d5-bfa1-89860498eb85" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561920 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="12410648-0772-40f7-9261-107634802711" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.561926 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" containerName="registry-server" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.562116 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="660d278f-da23-4033-82de-88c42ef375ed" 
containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.562127 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="660d278f-da23-4033-82de-88c42ef375ed" containerName="marketplace-operator" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.562765 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.565689 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.575432 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dlf42"] Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.665256 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-utilities\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.665298 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-catalog-content\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.665320 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d24zx\" (UniqueName: \"kubernetes.io/projected/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-kube-api-access-d24zx\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " 
pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.740915 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cm45d"] Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.743861 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.758837 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.767401 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-utilities\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.767469 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-catalog-content\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.767502 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d24zx\" (UniqueName: \"kubernetes.io/projected/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-kube-api-access-d24zx\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.768310 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-catalog-content\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.768779 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-utilities\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.770996 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cm45d"] Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.790504 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d24zx\" (UniqueName: \"kubernetes.io/projected/5eca2916-9f95-4cf4-b35f-7aebe5f09e19-kube-api-access-d24zx\") pod \"certified-operators-dlf42\" (UID: \"5eca2916-9f95-4cf4-b35f-7aebe5f09e19\") " pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.870509 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd2d39b4-a2bb-4311-872e-a9591621717f-utilities\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.870593 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk48c\" (UniqueName: \"kubernetes.io/projected/cd2d39b4-a2bb-4311-872e-a9591621717f-kube-api-access-vk48c\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " 
pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.870685 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd2d39b4-a2bb-4311-872e-a9591621717f-catalog-content\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.883051 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.971505 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd2d39b4-a2bb-4311-872e-a9591621717f-utilities\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.972006 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk48c\" (UniqueName: \"kubernetes.io/projected/cd2d39b4-a2bb-4311-872e-a9591621717f-kube-api-access-vk48c\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.972046 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd2d39b4-a2bb-4311-872e-a9591621717f-catalog-content\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.972925 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd2d39b4-a2bb-4311-872e-a9591621717f-catalog-content\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.972969 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd2d39b4-a2bb-4311-872e-a9591621717f-utilities\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:24 crc kubenswrapper[4840]: I0129 12:11:24.996139 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk48c\" (UniqueName: \"kubernetes.io/projected/cd2d39b4-a2bb-4311-872e-a9591621717f-kube-api-access-vk48c\") pod \"community-operators-cm45d\" (UID: \"cd2d39b4-a2bb-4311-872e-a9591621717f\") " pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.018420 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12410648-0772-40f7-9261-107634802711" path="/var/lib/kubelet/pods/12410648-0772-40f7-9261-107634802711/volumes" Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.019212 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f137467-8040-45d5-bfa1-89860498eb85" path="/var/lib/kubelet/pods/3f137467-8040-45d5-bfa1-89860498eb85/volumes" Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.020559 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="660d278f-da23-4033-82de-88c42ef375ed" path="/var/lib/kubelet/pods/660d278f-da23-4033-82de-88c42ef375ed/volumes" Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.022477 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd2c47ef-59b6-409e-8091-2e6f20eb89cb" 
path="/var/lib/kubelet/pods/dd2c47ef-59b6-409e-8091-2e6f20eb89cb/volumes" Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.023384 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8720503-5456-4934-a5a9-d58d6eeeb0a6" path="/var/lib/kubelet/pods/f8720503-5456-4934-a5a9-d58d6eeeb0a6/volumes" Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.084375 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.145700 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dlf42"] Jan 29 12:11:25 crc kubenswrapper[4840]: W0129 12:11:25.153835 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eca2916_9f95_4cf4_b35f_7aebe5f09e19.slice/crio-b1efd4ebb32b240457bf5ef6551619d2f5ea8777693e3937dc8a6ee467ae0ef8 WatchSource:0}: Error finding container b1efd4ebb32b240457bf5ef6551619d2f5ea8777693e3937dc8a6ee467ae0ef8: Status 404 returned error can't find the container with id b1efd4ebb32b240457bf5ef6551619d2f5ea8777693e3937dc8a6ee467ae0ef8 Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.274187 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlf42" event={"ID":"5eca2916-9f95-4cf4-b35f-7aebe5f09e19","Type":"ContainerStarted","Data":"b1efd4ebb32b240457bf5ef6551619d2f5ea8777693e3937dc8a6ee467ae0ef8"} Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.279080 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"e10267b8e4133aafc62b40dd54cb6e7f2b7971650f1dabb0aab935dd2a060868"} Jan 29 12:11:25 crc kubenswrapper[4840]: I0129 12:11:25.515846 4840 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-marketplace/community-operators-cm45d"] Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.145128 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5c8985447-tfqrv"] Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.146072 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" podUID="015cd172-e294-454b-b521-fa058d6bb518" containerName="controller-manager" containerID="cri-o://82300d8b07072ce82dd285ce77cd748475ac7550482b7e4cea6a448a8bae6d1c" gracePeriod=30 Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.166112 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"] Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.167041 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" podUID="018b4408-551c-4a0d-95cf-d1eaf9eab223" containerName="route-controller-manager" containerID="cri-o://649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba" gracePeriod=30 Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.292332 4840 generic.go:334] "Generic (PLEG): container finished" podID="cd2d39b4-a2bb-4311-872e-a9591621717f" containerID="a3ffbb28735c9e5f987b5af901d0b92cff47cbee90b871525b446ab25e0b9f9e" exitCode=0 Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.292444 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm45d" event={"ID":"cd2d39b4-a2bb-4311-872e-a9591621717f","Type":"ContainerDied","Data":"a3ffbb28735c9e5f987b5af901d0b92cff47cbee90b871525b446ab25e0b9f9e"} Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.292508 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cm45d" event={"ID":"cd2d39b4-a2bb-4311-872e-a9591621717f","Type":"ContainerStarted","Data":"aa12c3c0126ce4a371ea960b9f0ef239c622d610d3faffe881ef9c814539bd62"} Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.294630 4840 generic.go:334] "Generic (PLEG): container finished" podID="5eca2916-9f95-4cf4-b35f-7aebe5f09e19" containerID="af3b1c42ad8e17199d72ab6b6fb9b09b9e0a19af2b04f27795493b3cf25d9ebe" exitCode=0 Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.294738 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlf42" event={"ID":"5eca2916-9f95-4cf4-b35f-7aebe5f09e19","Type":"ContainerDied","Data":"af3b1c42ad8e17199d72ab6b6fb9b09b9e0a19af2b04f27795493b3cf25d9ebe"} Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.299154 4840 generic.go:334] "Generic (PLEG): container finished" podID="015cd172-e294-454b-b521-fa058d6bb518" containerID="82300d8b07072ce82dd285ce77cd748475ac7550482b7e4cea6a448a8bae6d1c" exitCode=0 Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.299223 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" event={"ID":"015cd172-e294-454b-b521-fa058d6bb518","Type":"ContainerDied","Data":"82300d8b07072ce82dd285ce77cd748475ac7550482b7e4cea6a448a8bae6d1c"} Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.686102 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.812638 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015cd172-e294-454b-b521-fa058d6bb518-serving-cert\") pod \"015cd172-e294-454b-b521-fa058d6bb518\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.812804 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-client-ca\") pod \"015cd172-e294-454b-b521-fa058d6bb518\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.812869 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-proxy-ca-bundles\") pod \"015cd172-e294-454b-b521-fa058d6bb518\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.812900 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crt76\" (UniqueName: \"kubernetes.io/projected/015cd172-e294-454b-b521-fa058d6bb518-kube-api-access-crt76\") pod \"015cd172-e294-454b-b521-fa058d6bb518\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.812929 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-config\") pod \"015cd172-e294-454b-b521-fa058d6bb518\" (UID: \"015cd172-e294-454b-b521-fa058d6bb518\") " Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.814635 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-client-ca" (OuterVolumeSpecName: "client-ca") pod "015cd172-e294-454b-b521-fa058d6bb518" (UID: "015cd172-e294-454b-b521-fa058d6bb518"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.814683 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-config" (OuterVolumeSpecName: "config") pod "015cd172-e294-454b-b521-fa058d6bb518" (UID: "015cd172-e294-454b-b521-fa058d6bb518"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.814666 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "015cd172-e294-454b-b521-fa058d6bb518" (UID: "015cd172-e294-454b-b521-fa058d6bb518"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.823302 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015cd172-e294-454b-b521-fa058d6bb518-kube-api-access-crt76" (OuterVolumeSpecName: "kube-api-access-crt76") pod "015cd172-e294-454b-b521-fa058d6bb518" (UID: "015cd172-e294-454b-b521-fa058d6bb518"). InnerVolumeSpecName "kube-api-access-crt76". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.823487 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015cd172-e294-454b-b521-fa058d6bb518-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "015cd172-e294-454b-b521-fa058d6bb518" (UID: "015cd172-e294-454b-b521-fa058d6bb518"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.914538 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015cd172-e294-454b-b521-fa058d6bb518-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.914577 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.914585 4840 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.914598 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crt76\" (UniqueName: \"kubernetes.io/projected/015cd172-e294-454b-b521-fa058d6bb518-kube-api-access-crt76\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.914609 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015cd172-e294-454b-b521-fa058d6bb518-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.950917 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-78p7s"] Jan 29 12:11:26 crc kubenswrapper[4840]: E0129 12:11:26.951242 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015cd172-e294-454b-b521-fa058d6bb518" containerName="controller-manager" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.951257 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="015cd172-e294-454b-b521-fa058d6bb518" containerName="controller-manager" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 
12:11:26.951393 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="015cd172-e294-454b-b521-fa058d6bb518" containerName="controller-manager" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.961233 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.963600 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78p7s"] Jan 29 12:11:26 crc kubenswrapper[4840]: I0129 12:11:26.966126 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.015866 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j777r\" (UniqueName: \"kubernetes.io/projected/c6495f02-13cd-40e3-85d9-5b63d3691f0a-kube-api-access-j777r\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.015918 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6495f02-13cd-40e3-85d9-5b63d3691f0a-catalog-content\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.015967 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6495f02-13cd-40e3-85d9-5b63d3691f0a-utilities\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: 
I0129 12:11:27.056987 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.117493 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j777r\" (UniqueName: \"kubernetes.io/projected/c6495f02-13cd-40e3-85d9-5b63d3691f0a-kube-api-access-j777r\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.117568 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6495f02-13cd-40e3-85d9-5b63d3691f0a-catalog-content\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.117594 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6495f02-13cd-40e3-85d9-5b63d3691f0a-utilities\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.118305 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6495f02-13cd-40e3-85d9-5b63d3691f0a-utilities\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.119527 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6495f02-13cd-40e3-85d9-5b63d3691f0a-catalog-content\") 
pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.144987 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nt7tn"] Jan 29 12:11:27 crc kubenswrapper[4840]: E0129 12:11:27.145392 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018b4408-551c-4a0d-95cf-d1eaf9eab223" containerName="route-controller-manager" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.145419 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="018b4408-551c-4a0d-95cf-d1eaf9eab223" containerName="route-controller-manager" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.145560 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="018b4408-551c-4a0d-95cf-d1eaf9eab223" containerName="route-controller-manager" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.146638 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.150373 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.159928 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j777r\" (UniqueName: \"kubernetes.io/projected/c6495f02-13cd-40e3-85d9-5b63d3691f0a-kube-api-access-j777r\") pod \"redhat-marketplace-78p7s\" (UID: \"c6495f02-13cd-40e3-85d9-5b63d3691f0a\") " pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.172655 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nt7tn"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.219997 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-client-ca\") pod \"018b4408-551c-4a0d-95cf-d1eaf9eab223\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.220302 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018b4408-551c-4a0d-95cf-d1eaf9eab223-serving-cert\") pod \"018b4408-551c-4a0d-95cf-d1eaf9eab223\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.220393 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-config\") pod \"018b4408-551c-4a0d-95cf-d1eaf9eab223\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.220442 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"kube-api-access-f9g6s\" (UniqueName: \"kubernetes.io/projected/018b4408-551c-4a0d-95cf-d1eaf9eab223-kube-api-access-f9g6s\") pod \"018b4408-551c-4a0d-95cf-d1eaf9eab223\" (UID: \"018b4408-551c-4a0d-95cf-d1eaf9eab223\") " Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.220529 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-catalog-content\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.220571 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-utilities\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.220610 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qf8\" (UniqueName: \"kubernetes.io/projected/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-kube-api-access-82qf8\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.220753 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-client-ca" (OuterVolumeSpecName: "client-ca") pod "018b4408-551c-4a0d-95cf-d1eaf9eab223" (UID: "018b4408-551c-4a0d-95cf-d1eaf9eab223"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.221198 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-config" (OuterVolumeSpecName: "config") pod "018b4408-551c-4a0d-95cf-d1eaf9eab223" (UID: "018b4408-551c-4a0d-95cf-d1eaf9eab223"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.224248 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/018b4408-551c-4a0d-95cf-d1eaf9eab223-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "018b4408-551c-4a0d-95cf-d1eaf9eab223" (UID: "018b4408-551c-4a0d-95cf-d1eaf9eab223"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.224282 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018b4408-551c-4a0d-95cf-d1eaf9eab223-kube-api-access-f9g6s" (OuterVolumeSpecName: "kube-api-access-f9g6s") pod "018b4408-551c-4a0d-95cf-d1eaf9eab223" (UID: "018b4408-551c-4a0d-95cf-d1eaf9eab223"). InnerVolumeSpecName "kube-api-access-f9g6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.285809 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.332994 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.332983 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5c8985447-tfqrv" event={"ID":"015cd172-e294-454b-b521-fa058d6bb518","Type":"ContainerDied","Data":"45e153d636ea4a29251a89f6d00adedafb4a66b36987dd8c2ed51126d6c4686a"} Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.333622 4840 scope.go:117] "RemoveContainer" containerID="82300d8b07072ce82dd285ce77cd748475ac7550482b7e4cea6a448a8bae6d1c" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.334350 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-catalog-content\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.334599 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-utilities\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.334781 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82qf8\" (UniqueName: \"kubernetes.io/projected/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-kube-api-access-82qf8\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.335063 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9g6s\" (UniqueName: 
\"kubernetes.io/projected/018b4408-551c-4a0d-95cf-d1eaf9eab223-kube-api-access-f9g6s\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.335084 4840 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-client-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.335250 4840 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/018b4408-551c-4a0d-95cf-d1eaf9eab223-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.335298 4840 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/018b4408-551c-4a0d-95cf-d1eaf9eab223-config\") on node \"crc\" DevicePath \"\"" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.335337 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-catalog-content\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.338194 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-utilities\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.343932 4840 generic.go:334] "Generic (PLEG): container finished" podID="018b4408-551c-4a0d-95cf-d1eaf9eab223" containerID="649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba" exitCode=0 Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.344081 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" event={"ID":"018b4408-551c-4a0d-95cf-d1eaf9eab223","Type":"ContainerDied","Data":"649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba"} Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.344319 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" event={"ID":"018b4408-551c-4a0d-95cf-d1eaf9eab223","Type":"ContainerDied","Data":"41d2da430433322ccff3d95d5fb4fdd90a8e6e9256a0acae307b2dee5ca3bd8f"} Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.344497 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.362886 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm45d" event={"ID":"cd2d39b4-a2bb-4311-872e-a9591621717f","Type":"ContainerStarted","Data":"2447f5abf29c7916747c6008e4fb7039537f91da6f3b251b2f23209bb32c80fc"} Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.368301 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlf42" event={"ID":"5eca2916-9f95-4cf4-b35f-7aebe5f09e19","Type":"ContainerStarted","Data":"7c88e2ff9b6e25ae7eb901c978d749a32e70e49bf33d5fb66366b5c788d66636"} Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.368533 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82qf8\" (UniqueName: \"kubernetes.io/projected/744fc57c-46df-4ef5-b1f4-2dceb4f52a66-kube-api-access-82qf8\") pod \"redhat-operators-nt7tn\" (UID: \"744fc57c-46df-4ef5-b1f4-2dceb4f52a66\") " pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.372359 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5c8985447-tfqrv"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.377453 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5c8985447-tfqrv"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.380153 4840 scope.go:117] "RemoveContainer" containerID="649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.401208 4840 scope.go:117] "RemoveContainer" containerID="649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba" Jan 29 12:11:27 crc kubenswrapper[4840]: E0129 12:11:27.402187 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba\": container with ID starting with 649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba not found: ID does not exist" containerID="649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.402261 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba"} err="failed to get container status \"649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba\": rpc error: code = NotFound desc = could not find container \"649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba\": container with ID starting with 649be25355d6b53bbf2fc061795511e3edf52eedce0e68d0a2bbb2433a25beba not found: ID does not exist" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.427809 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.434799 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-785fdfb9bc-bbbbn"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.485871 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.583551 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-78p7s"] Jan 29 12:11:27 crc kubenswrapper[4840]: W0129 12:11:27.606160 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6495f02_13cd_40e3_85d9_5b63d3691f0a.slice/crio-1c3759e452994bf3506d845b008d2eb82a1114cb5027395bc9967673975f1c99 WatchSource:0}: Error finding container 1c3759e452994bf3506d845b008d2eb82a1114cb5027395bc9967673975f1c99: Status 404 returned error can't find the container with id 1c3759e452994bf3506d845b008d2eb82a1114cb5027395bc9967673975f1c99 Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.710869 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mm9gz"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.711906 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.732034 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mm9gz"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.783634 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nt7tn"] Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.841759 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a9b622f-21de-4a4e-984d-7f340daa5528-trusted-ca\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.841835 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2a9b622f-21de-4a4e-984d-7f340daa5528-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.841871 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2a9b622f-21de-4a4e-984d-7f340daa5528-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.841900 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zpks\" (UniqueName: 
\"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-kube-api-access-8zpks\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.842116 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.842266 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-registry-tls\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.842442 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-bound-sa-token\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.842534 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2a9b622f-21de-4a4e-984d-7f340daa5528-registry-certificates\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 
12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.868360 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.944506 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-registry-tls\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.944585 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-bound-sa-token\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.944621 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2a9b622f-21de-4a4e-984d-7f340daa5528-registry-certificates\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.944662 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2a9b622f-21de-4a4e-984d-7f340daa5528-trusted-ca\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: 
\"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.944696 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2a9b622f-21de-4a4e-984d-7f340daa5528-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.944722 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2a9b622f-21de-4a4e-984d-7f340daa5528-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.944746 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zpks\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-kube-api-access-8zpks\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.946057 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2a9b622f-21de-4a4e-984d-7f340daa5528-ca-trust-extracted\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.946928 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/2a9b622f-21de-4a4e-984d-7f340daa5528-trusted-ca\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.947172 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2a9b622f-21de-4a4e-984d-7f340daa5528-registry-certificates\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.960259 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2a9b622f-21de-4a4e-984d-7f340daa5528-installation-pull-secrets\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.963345 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-registry-tls\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.967016 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zpks\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-kube-api-access-8zpks\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:27 crc kubenswrapper[4840]: I0129 12:11:27.967911 4840 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2a9b622f-21de-4a4e-984d-7f340daa5528-bound-sa-token\") pod \"image-registry-66df7c8f76-mm9gz\" (UID: \"2a9b622f-21de-4a4e-984d-7f340daa5528\") " pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.042141 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.120522 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b"] Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.121601 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.126041 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.126629 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.126964 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.126701 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt"] Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.128368 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.128452 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.131483 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.133801 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.134056 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.134377 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.134622 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.134633 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.138281 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.138888 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.142195 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt"] Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.149252 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 29 
12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.157244 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b"] Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252058 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-proxy-ca-bundles\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252148 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv4ll\" (UniqueName: \"kubernetes.io/projected/d30e30e7-15b8-4d21-9896-b60413eb6d9e-kube-api-access-fv4ll\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252428 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d30e30e7-15b8-4d21-9896-b60413eb6d9e-config\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252538 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d30e30e7-15b8-4d21-9896-b60413eb6d9e-serving-cert\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 
29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252644 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-config\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252689 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-client-ca\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252723 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee5c283-6be0-46d0-a265-da9128083529-serving-cert\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252796 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d30e30e7-15b8-4d21-9896-b60413eb6d9e-client-ca\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.252925 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2nk9\" (UniqueName: 
\"kubernetes.io/projected/bee5c283-6be0-46d0-a265-da9128083529-kube-api-access-s2nk9\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.339756 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-mm9gz"] Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354106 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-proxy-ca-bundles\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354172 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv4ll\" (UniqueName: \"kubernetes.io/projected/d30e30e7-15b8-4d21-9896-b60413eb6d9e-kube-api-access-fv4ll\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354200 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d30e30e7-15b8-4d21-9896-b60413eb6d9e-config\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354230 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d30e30e7-15b8-4d21-9896-b60413eb6d9e-serving-cert\") pod 
\"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354263 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-config\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354289 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-client-ca\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354310 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee5c283-6be0-46d0-a265-da9128083529-serving-cert\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354333 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d30e30e7-15b8-4d21-9896-b60413eb6d9e-client-ca\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.354353 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2nk9\" (UniqueName: \"kubernetes.io/projected/bee5c283-6be0-46d0-a265-da9128083529-kube-api-access-s2nk9\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.357094 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d30e30e7-15b8-4d21-9896-b60413eb6d9e-config\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.357280 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d30e30e7-15b8-4d21-9896-b60413eb6d9e-client-ca\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.358445 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-config\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.358613 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-client-ca\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.360138 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d30e30e7-15b8-4d21-9896-b60413eb6d9e-serving-cert\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.363242 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bee5c283-6be0-46d0-a265-da9128083529-proxy-ca-bundles\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.363270 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bee5c283-6be0-46d0-a265-da9128083529-serving-cert\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.373859 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv4ll\" (UniqueName: \"kubernetes.io/projected/d30e30e7-15b8-4d21-9896-b60413eb6d9e-kube-api-access-fv4ll\") pod \"route-controller-manager-797bd65847-q6zwt\" (UID: \"d30e30e7-15b8-4d21-9896-b60413eb6d9e\") " pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.374197 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2nk9\" (UniqueName: \"kubernetes.io/projected/bee5c283-6be0-46d0-a265-da9128083529-kube-api-access-s2nk9\") pod \"controller-manager-6fc6dd89d5-hwq4b\" (UID: \"bee5c283-6be0-46d0-a265-da9128083529\") " 
pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.376320 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" event={"ID":"2a9b622f-21de-4a4e-984d-7f340daa5528","Type":"ContainerStarted","Data":"106ba37b8a0aae09c3b53730c2dc7ceb579ca152da3b74edbb56f53856b2253a"} Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.378112 4840 generic.go:334] "Generic (PLEG): container finished" podID="c6495f02-13cd-40e3-85d9-5b63d3691f0a" containerID="aba2d5fcc6365d444b722c2d9f1b3491c280cc084741c22b9e5555cf643d0f7f" exitCode=0 Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.378202 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78p7s" event={"ID":"c6495f02-13cd-40e3-85d9-5b63d3691f0a","Type":"ContainerDied","Data":"aba2d5fcc6365d444b722c2d9f1b3491c280cc084741c22b9e5555cf643d0f7f"} Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.378242 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78p7s" event={"ID":"c6495f02-13cd-40e3-85d9-5b63d3691f0a","Type":"ContainerStarted","Data":"1c3759e452994bf3506d845b008d2eb82a1114cb5027395bc9967673975f1c99"} Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.388731 4840 generic.go:334] "Generic (PLEG): container finished" podID="cd2d39b4-a2bb-4311-872e-a9591621717f" containerID="2447f5abf29c7916747c6008e4fb7039537f91da6f3b251b2f23209bb32c80fc" exitCode=0 Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.388785 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm45d" event={"ID":"cd2d39b4-a2bb-4311-872e-a9591621717f","Type":"ContainerDied","Data":"2447f5abf29c7916747c6008e4fb7039537f91da6f3b251b2f23209bb32c80fc"} Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.394615 4840 generic.go:334] "Generic (PLEG): container 
finished" podID="5eca2916-9f95-4cf4-b35f-7aebe5f09e19" containerID="7c88e2ff9b6e25ae7eb901c978d749a32e70e49bf33d5fb66366b5c788d66636" exitCode=0 Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.402817 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlf42" event={"ID":"5eca2916-9f95-4cf4-b35f-7aebe5f09e19","Type":"ContainerDied","Data":"7c88e2ff9b6e25ae7eb901c978d749a32e70e49bf33d5fb66366b5c788d66636"} Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.407706 4840 generic.go:334] "Generic (PLEG): container finished" podID="744fc57c-46df-4ef5-b1f4-2dceb4f52a66" containerID="4e226e15cbd78a671684cbd998ab77280018090609932a0bc06cfa427b8d3a6e" exitCode=0 Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.407811 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nt7tn" event={"ID":"744fc57c-46df-4ef5-b1f4-2dceb4f52a66","Type":"ContainerDied","Data":"4e226e15cbd78a671684cbd998ab77280018090609932a0bc06cfa427b8d3a6e"} Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.407850 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nt7tn" event={"ID":"744fc57c-46df-4ef5-b1f4-2dceb4f52a66","Type":"ContainerStarted","Data":"d3112b7c12dd86957c15afadee6434a55e08c5405f2157bfa3ada4c73aa7c4f9"} Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.444524 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.485361 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:28 crc kubenswrapper[4840]: E0129 12:11:28.515009 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 29 12:11:28 crc kubenswrapper[4840]: E0129 12:11:28.515202 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j777r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSour
ce{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-78p7s_openshift-marketplace(c6495f02-13cd-40e3-85d9-5b63d3691f0a): ErrImagePull: initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)" logger="UnhandledError" Jan 29 12:11:28 crc kubenswrapper[4840]: E0129 12:11:28.517328 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"initializing source docker://registry.redhat.io/redhat/redhat-marketplace-index:v4.18: Requesting bearer token: invalid status code from registry 403 (Forbidden)\"" pod="openshift-marketplace/redhat-marketplace-78p7s" podUID="c6495f02-13cd-40e3-85d9-5b63d3691f0a" Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.721856 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b"] Jan 29 12:11:28 crc kubenswrapper[4840]: I0129 12:11:28.786846 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt"] Jan 29 12:11:28 crc kubenswrapper[4840]: W0129 12:11:28.794485 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd30e30e7_15b8_4d21_9896_b60413eb6d9e.slice/crio-066e8bc40f5c6da07368815c6b2896fca917a5e96329ce1ba378a8e20a366382 WatchSource:0}: Error finding container 066e8bc40f5c6da07368815c6b2896fca917a5e96329ce1ba378a8e20a366382: Status 404 returned error can't find the container with id 066e8bc40f5c6da07368815c6b2896fca917a5e96329ce1ba378a8e20a366382 Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.021213 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="015cd172-e294-454b-b521-fa058d6bb518" path="/var/lib/kubelet/pods/015cd172-e294-454b-b521-fa058d6bb518/volumes" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.031646 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018b4408-551c-4a0d-95cf-d1eaf9eab223" path="/var/lib/kubelet/pods/018b4408-551c-4a0d-95cf-d1eaf9eab223/volumes" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.423109 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" event={"ID":"2a9b622f-21de-4a4e-984d-7f340daa5528","Type":"ContainerStarted","Data":"5f86543e21e4a1a3e9a7a5605cf6638e73dc042dab86d4218e5bd20fa1cfb44d"} Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.423729 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.429384 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dlf42" event={"ID":"5eca2916-9f95-4cf4-b35f-7aebe5f09e19","Type":"ContainerStarted","Data":"e544533fa97c2d2956eafde01deab8d60d4fb27a3fd00b2ab0b02149b1178360"} Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.435616 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" event={"ID":"bee5c283-6be0-46d0-a265-da9128083529","Type":"ContainerStarted","Data":"35a82954193dc4546ea1f92c11bc661dfe20ac3e4af90103bd2132bace963870"} Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.435680 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" event={"ID":"bee5c283-6be0-46d0-a265-da9128083529","Type":"ContainerStarted","Data":"e814e02a5c561282e0a58010b0edf911fcebd932acfea65fd3e9a829cb6d4ffa"} Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.435868 4840 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.439640 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cm45d" event={"ID":"cd2d39b4-a2bb-4311-872e-a9591621717f","Type":"ContainerStarted","Data":"5deb874ad1b9cbd68d6710f79f409cf7bdbdfa92846a25628d699b69dca384b2"} Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.445316 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nt7tn" event={"ID":"744fc57c-46df-4ef5-b1f4-2dceb4f52a66","Type":"ContainerStarted","Data":"143a144268ab053cf789d59cb1d06019f4c53ddabc806962f08d51ad41ff2000"} Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.445795 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.448133 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" event={"ID":"d30e30e7-15b8-4d21-9896-b60413eb6d9e","Type":"ContainerStarted","Data":"eb0dd7847f6bea10ce726c5ab4483bb1063fcb77a3cdeb99e939f91bc07185c6"} Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.448278 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.448375 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" event={"ID":"d30e30e7-15b8-4d21-9896-b60413eb6d9e","Type":"ContainerStarted","Data":"066e8bc40f5c6da07368815c6b2896fca917a5e96329ce1ba378a8e20a366382"} Jan 29 12:11:29 crc kubenswrapper[4840]: E0129 12:11:29.450449 4840 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-78p7s" podUID="c6495f02-13cd-40e3-85d9-5b63d3691f0a" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.466734 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" podStartSLOduration=2.46670085 podStartE2EDuration="2.46670085s" podCreationTimestamp="2026-01-29 12:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:11:29.461281184 +0000 UTC m=+421.124261077" watchObservedRunningTime="2026-01-29 12:11:29.46670085 +0000 UTC m=+421.129680743" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.646347 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dlf42" podStartSLOduration=3.048328111 podStartE2EDuration="5.646325663s" podCreationTimestamp="2026-01-29 12:11:24 +0000 UTC" firstStartedPulling="2026-01-29 12:11:26.298655338 +0000 UTC m=+417.961635231" lastFinishedPulling="2026-01-29 12:11:28.89665289 +0000 UTC m=+420.559632783" observedRunningTime="2026-01-29 12:11:29.585689301 +0000 UTC m=+421.248669194" watchObservedRunningTime="2026-01-29 12:11:29.646325663 +0000 UTC m=+421.309305556" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.647972 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cm45d" podStartSLOduration=3.112990031 podStartE2EDuration="5.647962847s" podCreationTimestamp="2026-01-29 12:11:24 +0000 UTC" firstStartedPulling="2026-01-29 12:11:26.294421484 +0000 UTC m=+417.957401387" lastFinishedPulling="2026-01-29 12:11:28.82939431 +0000 UTC m=+420.492374203" observedRunningTime="2026-01-29 
12:11:29.645186592 +0000 UTC m=+421.308166505" watchObservedRunningTime="2026-01-29 12:11:29.647962847 +0000 UTC m=+421.310942740" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.711510 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6fc6dd89d5-hwq4b" podStartSLOduration=3.711477617 podStartE2EDuration="3.711477617s" podCreationTimestamp="2026-01-29 12:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:11:29.708002002 +0000 UTC m=+421.370981905" watchObservedRunningTime="2026-01-29 12:11:29.711477617 +0000 UTC m=+421.374457520" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.744211 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" podStartSLOduration=3.744190126 podStartE2EDuration="3.744190126s" podCreationTimestamp="2026-01-29 12:11:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:11:29.743735655 +0000 UTC m=+421.406715548" watchObservedRunningTime="2026-01-29 12:11:29.744190126 +0000 UTC m=+421.407170019" Jan 29 12:11:29 crc kubenswrapper[4840]: I0129 12:11:29.916294 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-797bd65847-q6zwt" Jan 29 12:11:30 crc kubenswrapper[4840]: I0129 12:11:30.455436 4840 generic.go:334] "Generic (PLEG): container finished" podID="744fc57c-46df-4ef5-b1f4-2dceb4f52a66" containerID="143a144268ab053cf789d59cb1d06019f4c53ddabc806962f08d51ad41ff2000" exitCode=0 Jan 29 12:11:30 crc kubenswrapper[4840]: I0129 12:11:30.456232 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nt7tn" 
event={"ID":"744fc57c-46df-4ef5-b1f4-2dceb4f52a66","Type":"ContainerDied","Data":"143a144268ab053cf789d59cb1d06019f4c53ddabc806962f08d51ad41ff2000"} Jan 29 12:11:31 crc kubenswrapper[4840]: I0129 12:11:31.473996 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nt7tn" event={"ID":"744fc57c-46df-4ef5-b1f4-2dceb4f52a66","Type":"ContainerStarted","Data":"1c20a9e1ce92726d4c1bd8130226260dc15ef1fc6648e4417905d21be131c641"} Jan 29 12:11:31 crc kubenswrapper[4840]: I0129 12:11:31.502643 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nt7tn" podStartSLOduration=1.974540326 podStartE2EDuration="4.502606785s" podCreationTimestamp="2026-01-29 12:11:27 +0000 UTC" firstStartedPulling="2026-01-29 12:11:28.409724407 +0000 UTC m=+420.072704300" lastFinishedPulling="2026-01-29 12:11:30.937790866 +0000 UTC m=+422.600770759" observedRunningTime="2026-01-29 12:11:31.500323433 +0000 UTC m=+423.163303326" watchObservedRunningTime="2026-01-29 12:11:31.502606785 +0000 UTC m=+423.165586678" Jan 29 12:11:34 crc kubenswrapper[4840]: I0129 12:11:34.883802 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:34 crc kubenswrapper[4840]: I0129 12:11:34.884834 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:34 crc kubenswrapper[4840]: I0129 12:11:34.933862 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:35 crc kubenswrapper[4840]: I0129 12:11:35.085028 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:35 crc kubenswrapper[4840]: I0129 12:11:35.085161 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:35 crc kubenswrapper[4840]: I0129 12:11:35.132286 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:35 crc kubenswrapper[4840]: I0129 12:11:35.550089 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cm45d" Jan 29 12:11:35 crc kubenswrapper[4840]: I0129 12:11:35.551068 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dlf42" Jan 29 12:11:37 crc kubenswrapper[4840]: I0129 12:11:37.486656 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:37 crc kubenswrapper[4840]: I0129 12:11:37.486852 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:37 crc kubenswrapper[4840]: I0129 12:11:37.530595 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:38 crc kubenswrapper[4840]: I0129 12:11:38.560831 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nt7tn" Jan 29 12:11:46 crc kubenswrapper[4840]: I0129 12:11:46.564893 4840 generic.go:334] "Generic (PLEG): container finished" podID="c6495f02-13cd-40e3-85d9-5b63d3691f0a" containerID="1729ff236b81191cb2c8a915eee14903377820b020266a54946ccab7c379ea66" exitCode=0 Jan 29 12:11:46 crc kubenswrapper[4840]: I0129 12:11:46.566036 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78p7s" event={"ID":"c6495f02-13cd-40e3-85d9-5b63d3691f0a","Type":"ContainerDied","Data":"1729ff236b81191cb2c8a915eee14903377820b020266a54946ccab7c379ea66"} Jan 29 12:11:48 crc kubenswrapper[4840]: I0129 
12:11:48.051154 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-mm9gz" Jan 29 12:11:48 crc kubenswrapper[4840]: I0129 12:11:48.138168 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8fwb9"] Jan 29 12:11:48 crc kubenswrapper[4840]: I0129 12:11:48.781044 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-78p7s" event={"ID":"c6495f02-13cd-40e3-85d9-5b63d3691f0a","Type":"ContainerStarted","Data":"5fea50dd18354a860936a7cd09a5866221769f306817e0b4233822d4fa39cb27"} Jan 29 12:11:48 crc kubenswrapper[4840]: I0129 12:11:48.810536 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-78p7s" podStartSLOduration=4.20728102 podStartE2EDuration="22.810510124s" podCreationTimestamp="2026-01-29 12:11:26 +0000 UTC" firstStartedPulling="2026-01-29 12:11:28.380911102 +0000 UTC m=+420.043890995" lastFinishedPulling="2026-01-29 12:11:46.984140206 +0000 UTC m=+438.647120099" observedRunningTime="2026-01-29 12:11:48.808857078 +0000 UTC m=+440.471836971" watchObservedRunningTime="2026-01-29 12:11:48.810510124 +0000 UTC m=+440.473490017" Jan 29 12:11:57 crc kubenswrapper[4840]: I0129 12:11:57.286105 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:57 crc kubenswrapper[4840]: I0129 12:11:57.287285 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:57 crc kubenswrapper[4840]: I0129 12:11:57.331152 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:11:57 crc kubenswrapper[4840]: I0129 12:11:57.886134 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-78p7s" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.191434 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" podUID="01215522-37e6-4461-91d1-f695896d6ede" containerName="registry" containerID="cri-o://20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892" gracePeriod=30 Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.617340 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.734930 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.735074 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-trusted-ca\") pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.735103 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01215522-37e6-4461-91d1-f695896d6ede-installation-pull-secrets\") pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.735206 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5fqb\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-kube-api-access-b5fqb\") 
pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.735234 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01215522-37e6-4461-91d1-f695896d6ede-ca-trust-extracted\") pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.735254 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-registry-tls\") pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.735276 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-bound-sa-token\") pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.735334 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-registry-certificates\") pod \"01215522-37e6-4461-91d1-f695896d6ede\" (UID: \"01215522-37e6-4461-91d1-f695896d6ede\") " Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.736389 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.736832 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.743692 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.744190 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-kube-api-access-b5fqb" (OuterVolumeSpecName: "kube-api-access-b5fqb") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "kube-api-access-b5fqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.744507 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.745782 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01215522-37e6-4461-91d1-f695896d6ede-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.751036 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.755466 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01215522-37e6-4461-91d1-f695896d6ede-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "01215522-37e6-4461-91d1-f695896d6ede" (UID: "01215522-37e6-4461-91d1-f695896d6ede"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.837548 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.837620 4840 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/01215522-37e6-4461-91d1-f695896d6ede-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.837637 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5fqb\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-kube-api-access-b5fqb\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.837649 4840 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/01215522-37e6-4461-91d1-f695896d6ede-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.837666 4840 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.837677 4840 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/01215522-37e6-4461-91d1-f695896d6ede-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.837690 4840 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/01215522-37e6-4461-91d1-f695896d6ede-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 29 12:12:13 crc 
kubenswrapper[4840]: I0129 12:12:13.959405 4840 generic.go:334] "Generic (PLEG): container finished" podID="01215522-37e6-4461-91d1-f695896d6ede" containerID="20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892" exitCode=0 Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.959464 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" event={"ID":"01215522-37e6-4461-91d1-f695896d6ede","Type":"ContainerDied","Data":"20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892"} Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.959501 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" event={"ID":"01215522-37e6-4461-91d1-f695896d6ede","Type":"ContainerDied","Data":"eefb02e2593812958d02fe9ed5fdb2692940ecda96f9d6fde565e6ca19a0cb83"} Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.959525 4840 scope.go:117] "RemoveContainer" containerID="20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.959541 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-8fwb9" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.975040 4840 scope.go:117] "RemoveContainer" containerID="20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892" Jan 29 12:12:13 crc kubenswrapper[4840]: E0129 12:12:13.975496 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892\": container with ID starting with 20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892 not found: ID does not exist" containerID="20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892" Jan 29 12:12:13 crc kubenswrapper[4840]: I0129 12:12:13.975545 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892"} err="failed to get container status \"20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892\": rpc error: code = NotFound desc = could not find container \"20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892\": container with ID starting with 20b996613aa02d24bff0bb019d233c869858bb821d6f8fd7a265f7cad103d892 not found: ID does not exist" Jan 29 12:12:14 crc kubenswrapper[4840]: I0129 12:12:14.010096 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8fwb9"] Jan 29 12:12:14 crc kubenswrapper[4840]: I0129 12:12:14.015716 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-8fwb9"] Jan 29 12:12:15 crc kubenswrapper[4840]: I0129 12:12:15.008567 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01215522-37e6-4461-91d1-f695896d6ede" path="/var/lib/kubelet/pods/01215522-37e6-4461-91d1-f695896d6ede/volumes" Jan 29 12:13:53 crc kubenswrapper[4840]: I0129 
12:13:53.522596 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:13:53 crc kubenswrapper[4840]: I0129 12:13:53.526081 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:14:23 crc kubenswrapper[4840]: I0129 12:14:23.521998 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:14:23 crc kubenswrapper[4840]: I0129 12:14:23.522525 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:14:53 crc kubenswrapper[4840]: I0129 12:14:53.521874 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:14:53 crc kubenswrapper[4840]: I0129 12:14:53.522434 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" 
podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:14:53 crc kubenswrapper[4840]: I0129 12:14:53.523365 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:14:53 crc kubenswrapper[4840]: I0129 12:14:53.523828 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e10267b8e4133aafc62b40dd54cb6e7f2b7971650f1dabb0aab935dd2a060868"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:14:53 crc kubenswrapper[4840]: I0129 12:14:53.523880 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://e10267b8e4133aafc62b40dd54cb6e7f2b7971650f1dabb0aab935dd2a060868" gracePeriod=600 Jan 29 12:14:54 crc kubenswrapper[4840]: I0129 12:14:54.449416 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="e10267b8e4133aafc62b40dd54cb6e7f2b7971650f1dabb0aab935dd2a060868" exitCode=0 Jan 29 12:14:54 crc kubenswrapper[4840]: I0129 12:14:54.449550 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"e10267b8e4133aafc62b40dd54cb6e7f2b7971650f1dabb0aab935dd2a060868"} Jan 29 12:14:54 crc kubenswrapper[4840]: I0129 12:14:54.450063 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"5eea126e892ef85191aa623b40e4a1a85f8aad602ec61c374067a4fa5bc21cd1"} Jan 29 12:14:54 crc kubenswrapper[4840]: I0129 12:14:54.450102 4840 scope.go:117] "RemoveContainer" containerID="5ff863e72d8fdd89fd574ba3941e895f5b60a6ec9ccb97dd5b92e67bce246615" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.178433 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc"] Jan 29 12:15:00 crc kubenswrapper[4840]: E0129 12:15:00.179531 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01215522-37e6-4461-91d1-f695896d6ede" containerName="registry" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.179552 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="01215522-37e6-4461-91d1-f695896d6ede" containerName="registry" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.179666 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="01215522-37e6-4461-91d1-f695896d6ede" containerName="registry" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.180229 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.185574 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.190895 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.191432 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc"] Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.217904 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3fae12-df97-4e5b-85a9-a609f7291cdb-config-volume\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.217969 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6nsj\" (UniqueName: \"kubernetes.io/projected/ac3fae12-df97-4e5b-85a9-a609f7291cdb-kube-api-access-p6nsj\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.218006 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac3fae12-df97-4e5b-85a9-a609f7291cdb-secret-volume\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.319030 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3fae12-df97-4e5b-85a9-a609f7291cdb-config-volume\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.319084 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6nsj\" (UniqueName: \"kubernetes.io/projected/ac3fae12-df97-4e5b-85a9-a609f7291cdb-kube-api-access-p6nsj\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.319126 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac3fae12-df97-4e5b-85a9-a609f7291cdb-secret-volume\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.320345 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3fae12-df97-4e5b-85a9-a609f7291cdb-config-volume\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.325934 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ac3fae12-df97-4e5b-85a9-a609f7291cdb-secret-volume\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.347089 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6nsj\" (UniqueName: \"kubernetes.io/projected/ac3fae12-df97-4e5b-85a9-a609f7291cdb-kube-api-access-p6nsj\") pod \"collect-profiles-29494815-8fqzc\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.501857 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:00 crc kubenswrapper[4840]: I0129 12:15:00.758337 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc"] Jan 29 12:15:01 crc kubenswrapper[4840]: I0129 12:15:01.494220 4840 generic.go:334] "Generic (PLEG): container finished" podID="ac3fae12-df97-4e5b-85a9-a609f7291cdb" containerID="2c8d031ff36d2eae770e8d8da93ebda169cb7398852d47ad4331c330ae0a5576" exitCode=0 Jan 29 12:15:01 crc kubenswrapper[4840]: I0129 12:15:01.494370 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" event={"ID":"ac3fae12-df97-4e5b-85a9-a609f7291cdb","Type":"ContainerDied","Data":"2c8d031ff36d2eae770e8d8da93ebda169cb7398852d47ad4331c330ae0a5576"} Jan 29 12:15:01 crc kubenswrapper[4840]: I0129 12:15:01.494718 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" 
event={"ID":"ac3fae12-df97-4e5b-85a9-a609f7291cdb","Type":"ContainerStarted","Data":"000156a32c14586f4457972c74868e1e1e5a5d50f90b334c33613bedb357f1b7"} Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.709493 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.754317 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6nsj\" (UniqueName: \"kubernetes.io/projected/ac3fae12-df97-4e5b-85a9-a609f7291cdb-kube-api-access-p6nsj\") pod \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.754396 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3fae12-df97-4e5b-85a9-a609f7291cdb-config-volume\") pod \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.754519 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac3fae12-df97-4e5b-85a9-a609f7291cdb-secret-volume\") pod \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\" (UID: \"ac3fae12-df97-4e5b-85a9-a609f7291cdb\") " Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.757408 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac3fae12-df97-4e5b-85a9-a609f7291cdb-config-volume" (OuterVolumeSpecName: "config-volume") pod "ac3fae12-df97-4e5b-85a9-a609f7291cdb" (UID: "ac3fae12-df97-4e5b-85a9-a609f7291cdb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.762438 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3fae12-df97-4e5b-85a9-a609f7291cdb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ac3fae12-df97-4e5b-85a9-a609f7291cdb" (UID: "ac3fae12-df97-4e5b-85a9-a609f7291cdb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.762529 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3fae12-df97-4e5b-85a9-a609f7291cdb-kube-api-access-p6nsj" (OuterVolumeSpecName: "kube-api-access-p6nsj") pod "ac3fae12-df97-4e5b-85a9-a609f7291cdb" (UID: "ac3fae12-df97-4e5b-85a9-a609f7291cdb"). InnerVolumeSpecName "kube-api-access-p6nsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.855878 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6nsj\" (UniqueName: \"kubernetes.io/projected/ac3fae12-df97-4e5b-85a9-a609f7291cdb-kube-api-access-p6nsj\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.855971 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac3fae12-df97-4e5b-85a9-a609f7291cdb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:02 crc kubenswrapper[4840]: I0129 12:15:02.855985 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ac3fae12-df97-4e5b-85a9-a609f7291cdb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:15:03 crc kubenswrapper[4840]: I0129 12:15:03.511108 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" 
event={"ID":"ac3fae12-df97-4e5b-85a9-a609f7291cdb","Type":"ContainerDied","Data":"000156a32c14586f4457972c74868e1e1e5a5d50f90b334c33613bedb357f1b7"} Jan 29 12:15:03 crc kubenswrapper[4840]: I0129 12:15:03.511182 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="000156a32c14586f4457972c74868e1e1e5a5d50f90b334c33613bedb357f1b7" Jan 29 12:15:03 crc kubenswrapper[4840]: I0129 12:15:03.511222 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.212565 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dssln"] Jan 29 12:16:38 crc kubenswrapper[4840]: E0129 12:16:38.213846 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3fae12-df97-4e5b-85a9-a609f7291cdb" containerName="collect-profiles" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.213864 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3fae12-df97-4e5b-85a9-a609f7291cdb" containerName="collect-profiles" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.213994 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3fae12-df97-4e5b-85a9-a609f7291cdb" containerName="collect-profiles" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.214546 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.216635 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.216695 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.217386 4840 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-g6pn2" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.225141 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-t8kjb"] Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.226431 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-t8kjb" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.228483 4840 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7mbds" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.228894 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dssln"] Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.238990 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gxz5g"] Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.240227 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.243035 4840 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-82zx9" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.245907 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-t8kjb"] Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.250326 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gxz5g"] Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.337145 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8xc\" (UniqueName: \"kubernetes.io/projected/5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410-kube-api-access-2x8xc\") pod \"cert-manager-858654f9db-t8kjb\" (UID: \"5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410\") " pod="cert-manager/cert-manager-858654f9db-t8kjb" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.337268 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4vp\" (UniqueName: \"kubernetes.io/projected/4952f2c9-a631-40ba-ac0c-96e4f64b52dd-kube-api-access-gs4vp\") pod \"cert-manager-webhook-687f57d79b-gxz5g\" (UID: \"4952f2c9-a631-40ba-ac0c-96e4f64b52dd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.337359 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jctvv\" (UniqueName: \"kubernetes.io/projected/204fbe16-f032-48b3-8e82-8b302a5d0ef1-kube-api-access-jctvv\") pod \"cert-manager-cainjector-cf98fcc89-dssln\" (UID: \"204fbe16-f032-48b3-8e82-8b302a5d0ef1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.438887 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jctvv\" (UniqueName: \"kubernetes.io/projected/204fbe16-f032-48b3-8e82-8b302a5d0ef1-kube-api-access-jctvv\") pod \"cert-manager-cainjector-cf98fcc89-dssln\" (UID: \"204fbe16-f032-48b3-8e82-8b302a5d0ef1\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.438979 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x8xc\" (UniqueName: \"kubernetes.io/projected/5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410-kube-api-access-2x8xc\") pod \"cert-manager-858654f9db-t8kjb\" (UID: \"5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410\") " pod="cert-manager/cert-manager-858654f9db-t8kjb" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.439043 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs4vp\" (UniqueName: \"kubernetes.io/projected/4952f2c9-a631-40ba-ac0c-96e4f64b52dd-kube-api-access-gs4vp\") pod \"cert-manager-webhook-687f57d79b-gxz5g\" (UID: \"4952f2c9-a631-40ba-ac0c-96e4f64b52dd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.461638 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x8xc\" (UniqueName: \"kubernetes.io/projected/5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410-kube-api-access-2x8xc\") pod \"cert-manager-858654f9db-t8kjb\" (UID: \"5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410\") " pod="cert-manager/cert-manager-858654f9db-t8kjb" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.462328 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jctvv\" (UniqueName: \"kubernetes.io/projected/204fbe16-f032-48b3-8e82-8b302a5d0ef1-kube-api-access-jctvv\") pod \"cert-manager-cainjector-cf98fcc89-dssln\" (UID: \"204fbe16-f032-48b3-8e82-8b302a5d0ef1\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.462459 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs4vp\" (UniqueName: \"kubernetes.io/projected/4952f2c9-a631-40ba-ac0c-96e4f64b52dd-kube-api-access-gs4vp\") pod \"cert-manager-webhook-687f57d79b-gxz5g\" (UID: \"4952f2c9-a631-40ba-ac0c-96e4f64b52dd\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.536317 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.547638 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-t8kjb" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.556177 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.898481 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-dssln"] Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.901628 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:16:38 crc kubenswrapper[4840]: I0129 12:16:38.939104 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gxz5g"] Jan 29 12:16:39 crc kubenswrapper[4840]: I0129 12:16:39.154437 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" event={"ID":"4952f2c9-a631-40ba-ac0c-96e4f64b52dd","Type":"ContainerStarted","Data":"cc194705183900a2d9bbb08cd0a7605f9834a7fe7c49f8ae93ef2ea8e7bc0aef"} Jan 29 12:16:39 crc kubenswrapper[4840]: I0129 12:16:39.155824 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" event={"ID":"204fbe16-f032-48b3-8e82-8b302a5d0ef1","Type":"ContainerStarted","Data":"b0271932f387bdebcdcf0e7aa9bafbb44f5f690e920d1a588edf4a7f3f39a094"} Jan 29 12:16:39 crc kubenswrapper[4840]: I0129 12:16:39.160560 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-t8kjb"] Jan 29 12:16:40 crc kubenswrapper[4840]: I0129 12:16:40.168984 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-t8kjb" event={"ID":"5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410","Type":"ContainerStarted","Data":"ceb18bcd575af44c05324102ed163a2ba5b600dcacf25c555b0867ec26262f5e"} Jan 29 12:16:43 crc kubenswrapper[4840]: I0129 12:16:43.187631 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" event={"ID":"204fbe16-f032-48b3-8e82-8b302a5d0ef1","Type":"ContainerStarted","Data":"b3c4fc7845c4054ed5148d3fed13cfb79a5fe7346cf2491f8a3d1a1a82c67672"} Jan 29 12:16:43 crc kubenswrapper[4840]: I0129 12:16:43.191988 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-t8kjb" event={"ID":"5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410","Type":"ContainerStarted","Data":"8d2dc8a47a4c7adf510e22cd34ae20074e37177db0d0fcaf961729ef285dc37c"} Jan 29 12:16:43 crc kubenswrapper[4840]: I0129 12:16:43.194752 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" event={"ID":"4952f2c9-a631-40ba-ac0c-96e4f64b52dd","Type":"ContainerStarted","Data":"9c193864ee45f7583d7b76c6a3506dd69f00c08d9332e54fcc07d3e780300ec9"} Jan 29 12:16:43 crc kubenswrapper[4840]: I0129 12:16:43.195234 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" Jan 29 12:16:43 crc kubenswrapper[4840]: I0129 12:16:43.210642 4840 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-dssln" podStartSLOduration=1.246942534 podStartE2EDuration="5.210617046s" podCreationTimestamp="2026-01-29 12:16:38 +0000 UTC" firstStartedPulling="2026-01-29 12:16:38.90139757 +0000 UTC m=+730.564377463" lastFinishedPulling="2026-01-29 12:16:42.865072062 +0000 UTC m=+734.528051975" observedRunningTime="2026-01-29 12:16:43.209313352 +0000 UTC m=+734.872293255" watchObservedRunningTime="2026-01-29 12:16:43.210617046 +0000 UTC m=+734.873596939" Jan 29 12:16:43 crc kubenswrapper[4840]: I0129 12:16:43.239116 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-t8kjb" podStartSLOduration=1.514404464 podStartE2EDuration="5.239091739s" podCreationTimestamp="2026-01-29 12:16:38 +0000 UTC" firstStartedPulling="2026-01-29 12:16:39.162466181 +0000 UTC m=+730.825446074" lastFinishedPulling="2026-01-29 12:16:42.887153456 +0000 UTC m=+734.550133349" observedRunningTime="2026-01-29 12:16:43.236618324 +0000 UTC m=+734.899598237" watchObservedRunningTime="2026-01-29 12:16:43.239091739 +0000 UTC m=+734.902071632" Jan 29 12:16:43 crc kubenswrapper[4840]: I0129 12:16:43.253155 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" podStartSLOduration=1.316528292 podStartE2EDuration="5.253117819s" podCreationTimestamp="2026-01-29 12:16:38 +0000 UTC" firstStartedPulling="2026-01-29 12:16:38.949105861 +0000 UTC m=+730.612085754" lastFinishedPulling="2026-01-29 12:16:42.885695388 +0000 UTC m=+734.548675281" observedRunningTime="2026-01-29 12:16:43.2493518 +0000 UTC m=+734.912331693" watchObservedRunningTime="2026-01-29 12:16:43.253117819 +0000 UTC m=+734.916097712" Jan 29 12:16:48 crc kubenswrapper[4840]: I0129 12:16:48.560811 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-gxz5g" Jan 29 12:16:53 crc 
kubenswrapper[4840]: I0129 12:16:53.522169 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:16:53 crc kubenswrapper[4840]: I0129 12:16:53.522532 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.534075 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vl4fj"] Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.534856 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-controller" containerID="cri-o://da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef" gracePeriod=30 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.534927 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="nbdb" containerID="cri-o://ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c" gracePeriod=30 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.534997 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="sbdb" containerID="cri-o://527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219" gracePeriod=30 Jan 29 12:16:56 crc 
kubenswrapper[4840]: I0129 12:16:56.535061 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-node" containerID="cri-o://052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62" gracePeriod=30 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.535142 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="northd" containerID="cri-o://046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a" gracePeriod=30 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.535014 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-acl-logging" containerID="cri-o://075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c" gracePeriod=30 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.535195 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5" gracePeriod=30 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.590724 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller" containerID="cri-o://3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc" gracePeriod=30 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.813960 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovnkube-controller/3.log" Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.816216 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovn-acl-logging/0.log" Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.816648 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovn-controller/0.log" Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817047 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc" exitCode=0 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817080 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219" exitCode=0 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817092 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c" exitCode=0 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817101 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5" exitCode=0 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817108 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62" exitCode=0 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817116 4840 generic.go:334] "Generic (PLEG): container finished" 
podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c" exitCode=143 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817124 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef" exitCode=143 Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817127 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc"} Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817163 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219"} Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817176 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c"} Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817185 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5"} Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817193 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62"} Jan 29 12:16:56 crc 
kubenswrapper[4840]: I0129 12:16:56.817201 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c"}
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817210 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef"}
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.817226 4840 scope.go:117] "RemoveContainer" containerID="8ef8c3ccfaad8b497f040527313d56380033dfc01b18443a733413994f37e548"
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.818818 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/2.log"
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.819243 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/1.log"
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.819287 4840 generic.go:334] "Generic (PLEG): container finished" podID="d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969" containerID="2a2fc0da70c75ed38629049bf5f40c727f55fba7a52031ca48c5e3d8ada6f6fe" exitCode=2
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.819343 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerDied","Data":"2a2fc0da70c75ed38629049bf5f40c727f55fba7a52031ca48c5e3d8ada6f6fe"}
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.819962 4840 scope.go:117] "RemoveContainer" containerID="2a2fc0da70c75ed38629049bf5f40c727f55fba7a52031ca48c5e3d8ada6f6fe"
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.869768 4840 scope.go:117] "RemoveContainer" containerID="ff435f9ad41605de7d8db59d05550dd1a331288be6ba4826196a970d5af81a06"
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.954687 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovn-acl-logging/0.log"
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.955281 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovn-controller/0.log"
Jan 29 12:16:56 crc kubenswrapper[4840]: I0129 12:16:56.955829 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013121 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-d826q"]
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013297 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013308 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013314 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="northd"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013321 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="northd"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013334 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="nbdb"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013340 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="nbdb"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013350 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kubecfg-setup"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013357 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kubecfg-setup"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013364 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-acl-logging"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013370 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-acl-logging"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013377 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013383 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013395 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-node"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013401 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-node"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013409 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013414 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013424 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="sbdb"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013431 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="sbdb"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013444 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013450 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013458 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013464 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013471 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013477 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: E0129 12:16:57.013483 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013488 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013573 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-node"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013584 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="sbdb"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013594 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="northd"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013604 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="kube-rbac-proxy-ovn-metrics"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013610 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013617 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013624 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013630 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovn-acl-logging"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013640 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="nbdb"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013647 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013654 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.013815 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" containerName="ovnkube-controller"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.015593 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.092699 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-netns\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.092801 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-netd\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.092819 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.092902 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.092936 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-log-socket\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.092977 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-log-socket" (OuterVolumeSpecName: "log-socket") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.092982 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-systemd-units\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093006 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093042 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-var-lib-openvswitch\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093089 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-config\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093121 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-env-overrides\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093161 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-systemd\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093170 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093186 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-var-lib-cni-networks-ovn-kubernetes\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093224 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-etc-openvswitch\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093256 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4wbn\" (UniqueName: \"kubernetes.io/projected/b331ae03-7000-435b-8cb4-65da0c67d876-kube-api-access-n4wbn\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093285 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093319 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-node-log\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093344 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-openvswitch\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093373 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093393 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-bin\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093426 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-ovn\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093472 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-slash\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093496 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-kubelet\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093527 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-ovn-kubernetes\") pod \"b331ae03-7000-435b-8cb4-65da0c67d876\" (UID: \"b331ae03-7000-435b-8cb4-65da0c67d876\") "
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093316 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093348 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093374 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-node-log" (OuterVolumeSpecName: "node-log") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093586 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093620 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-slash" (OuterVolumeSpecName: "host-slash") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093692 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093640 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093698 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093665 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.093722 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094053 4840 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-slash\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094071 4840 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094085 4840 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094098 4840 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094102 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094110 4840 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094224 4840 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-log-socket\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094240 4840 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094254 4840 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094269 4840 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094281 4840 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094298 4840 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094310 4840 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-node-log\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094321 4840 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094335 4840 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094345 4840 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.094711 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.098601 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.099601 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b331ae03-7000-435b-8cb4-65da0c67d876-kube-api-access-n4wbn" (OuterVolumeSpecName: "kube-api-access-n4wbn") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "kube-api-access-n4wbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.106457 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "b331ae03-7000-435b-8cb4-65da0c67d876" (UID: "b331ae03-7000-435b-8cb4-65da0c67d876"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195144 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-cni-bin\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195191 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195212 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-var-lib-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195239 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-systemd-units\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195257 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-systemd\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195272 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-ovn\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195290 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-slash\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195302 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-run-netns\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195320 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-env-overrides\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195366 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195386 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovnkube-script-lib\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195407 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-kubelet\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195425 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-etc-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195443 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-log-socket\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195465 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-cni-netd\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195481 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-node-log\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195498 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195515 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovn-node-metrics-cert\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195537 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovnkube-config\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195555 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgwqv\" (UniqueName: \"kubernetes.io/projected/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-kube-api-access-pgwqv\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195589 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195599 4840 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/b331ae03-7000-435b-8cb4-65da0c67d876-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195609 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4wbn\" (UniqueName: \"kubernetes.io/projected/b331ae03-7000-435b-8cb4-65da0c67d876-kube-api-access-n4wbn\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195618 4840 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b331ae03-7000-435b-8cb4-65da0c67d876-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.195627 4840 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/b331ae03-7000-435b-8cb4-65da0c67d876-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.296870 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovnkube-config\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.296919 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgwqv\" (UniqueName: \"kubernetes.io/projected/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-kube-api-access-pgwqv\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.296957 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-cni-bin\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q"
Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.296976 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d826q\" (UID: 
\"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297015 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-var-lib-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297036 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-systemd-units\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297059 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-systemd\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297079 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-ovn\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297098 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-run-netns\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297112 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-slash\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297133 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-env-overrides\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297159 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297174 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovnkube-script-lib\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297192 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-kubelet\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 
crc kubenswrapper[4840]: I0129 12:16:57.297211 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-etc-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297232 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-log-socket\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297271 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-cni-netd\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-node-log\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297326 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297349 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovn-node-metrics-cert\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.297971 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-slash\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298038 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-ovn\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298046 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-systemd\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298066 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-run-netns\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298111 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298115 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-systemd-units\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298139 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-etc-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298159 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-run-ovn-kubernetes\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298293 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-kubelet\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298303 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-cni-netd\") 
pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298330 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-node-log\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298357 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-log-socket\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298360 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-run-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298361 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-var-lib-openvswitch\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298753 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovnkube-script-lib\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298785 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovnkube-config\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298829 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-env-overrides\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.298858 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-host-cni-bin\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.301076 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-ovn-node-metrics-cert\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.313474 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgwqv\" (UniqueName: \"kubernetes.io/projected/fbe9ea30-54c1-4b75-922a-4c8af95b23b1-kube-api-access-pgwqv\") pod \"ovnkube-node-d826q\" (UID: \"fbe9ea30-54c1-4b75-922a-4c8af95b23b1\") " pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 
12:16:57.327717 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:16:57 crc kubenswrapper[4840]: W0129 12:16:57.346302 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbe9ea30_54c1_4b75_922a_4c8af95b23b1.slice/crio-ca434519dc802637176c20b40b79eaa645e6af70829b6bd582c5606cc0573876 WatchSource:0}: Error finding container ca434519dc802637176c20b40b79eaa645e6af70829b6bd582c5606cc0573876: Status 404 returned error can't find the container with id ca434519dc802637176c20b40b79eaa645e6af70829b6bd582c5606cc0573876 Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.825188 4840 generic.go:334] "Generic (PLEG): container finished" podID="fbe9ea30-54c1-4b75-922a-4c8af95b23b1" containerID="3fec5943c4f8106b0edc2e65876ecf907cb4d5eb7b91c49886285ab9dade0f86" exitCode=0 Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.825275 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerDied","Data":"3fec5943c4f8106b0edc2e65876ecf907cb4d5eb7b91c49886285ab9dade0f86"} Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.825347 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"ca434519dc802637176c20b40b79eaa645e6af70829b6bd582c5606cc0573876"} Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.832865 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovn-acl-logging/0.log" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.833414 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-vl4fj_b331ae03-7000-435b-8cb4-65da0c67d876/ovn-controller/0.log" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.834343 4840 generic.go:334] "Generic (PLEG): container finished" podID="b331ae03-7000-435b-8cb4-65da0c67d876" containerID="046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a" exitCode=0 Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.834416 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a"} Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.834423 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.834439 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vl4fj" event={"ID":"b331ae03-7000-435b-8cb4-65da0c67d876","Type":"ContainerDied","Data":"a6aa2824145345222ab8c76d9dc24b521088c36c516af4c38111d2adc8bdd235"} Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.834455 4840 scope.go:117] "RemoveContainer" containerID="3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.837920 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-2zc5r_d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969/kube-multus/2.log" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.837995 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-2zc5r" event={"ID":"d04eb3ab-8c5c-45c3-bf6d-f0e1ff12f969","Type":"ContainerStarted","Data":"7f6aa3aecef44dcc8e68dd92ade766521a7ae780be2bc8cca6ca7659f909e1ad"} Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.864610 4840 scope.go:117] 
"RemoveContainer" containerID="527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.886319 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vl4fj"] Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.900881 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vl4fj"] Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.920247 4840 scope.go:117] "RemoveContainer" containerID="ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.944019 4840 scope.go:117] "RemoveContainer" containerID="046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.957240 4840 scope.go:117] "RemoveContainer" containerID="7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.972379 4840 scope.go:117] "RemoveContainer" containerID="052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62" Jan 29 12:16:57 crc kubenswrapper[4840]: I0129 12:16:57.986608 4840 scope.go:117] "RemoveContainer" containerID="075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.000983 4840 scope.go:117] "RemoveContainer" containerID="da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.027491 4840 scope.go:117] "RemoveContainer" containerID="5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.056301 4840 scope.go:117] "RemoveContainer" containerID="3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.056802 4840 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc\": container with ID starting with 3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc not found: ID does not exist" containerID="3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.056847 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc"} err="failed to get container status \"3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc\": rpc error: code = NotFound desc = could not find container \"3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc\": container with ID starting with 3de28a7cebde9c6aa89ba5d8b699ae15f3dffb8e034482884551a19aa8ce30fc not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.056879 4840 scope.go:117] "RemoveContainer" containerID="527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.057218 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\": container with ID starting with 527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219 not found: ID does not exist" containerID="527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.057251 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219"} err="failed to get container status \"527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\": rpc error: code = NotFound desc = could not find container 
\"527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219\": container with ID starting with 527874433c354f2591af96957257563748bb8752e2784e1a681431ad68714219 not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.057278 4840 scope.go:117] "RemoveContainer" containerID="ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.057506 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\": container with ID starting with ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c not found: ID does not exist" containerID="ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.057536 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c"} err="failed to get container status \"ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\": rpc error: code = NotFound desc = could not find container \"ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c\": container with ID starting with ac466fb62326cdaccbb61a13888bd10827715d603665518142048fc9b8ee739c not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.057553 4840 scope.go:117] "RemoveContainer" containerID="046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.057879 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\": container with ID starting with 046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a not found: ID does not exist" 
containerID="046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.057900 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a"} err="failed to get container status \"046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\": rpc error: code = NotFound desc = could not find container \"046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a\": container with ID starting with 046270a547183ca1ca84d9563ca4fce79323610f1327a5c5d9830db67976a45a not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.057912 4840 scope.go:117] "RemoveContainer" containerID="7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.058158 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\": container with ID starting with 7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5 not found: ID does not exist" containerID="7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.058186 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5"} err="failed to get container status \"7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\": rpc error: code = NotFound desc = could not find container \"7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5\": container with ID starting with 7fb2c94ffc9f8eb3496e9507dc88c59598dfcc471c7add4a22696873491f68a5 not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.058200 4840 scope.go:117] 
"RemoveContainer" containerID="052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.058423 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\": container with ID starting with 052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62 not found: ID does not exist" containerID="052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.058441 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62"} err="failed to get container status \"052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\": rpc error: code = NotFound desc = could not find container \"052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62\": container with ID starting with 052a62d6060bf62f819dd531daf97f50125720ec43245debba82e2ad6b2a8d62 not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.058454 4840 scope.go:117] "RemoveContainer" containerID="075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.058713 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\": container with ID starting with 075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c not found: ID does not exist" containerID="075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.058756 4840 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c"} err="failed to get container status \"075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\": rpc error: code = NotFound desc = could not find container \"075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c\": container with ID starting with 075e4e21bf30d22db721ce9833383d5f441054e3812e9739ca8b98223f107f2c not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.058786 4840 scope.go:117] "RemoveContainer" containerID="da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.059128 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\": container with ID starting with da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef not found: ID does not exist" containerID="da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.059150 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef"} err="failed to get container status \"da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\": rpc error: code = NotFound desc = could not find container \"da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef\": container with ID starting with da6473442b81ef35e2c55c7f5af8e8e431a93b7099d05d736f5c87d7cb0d52ef not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.059164 4840 scope.go:117] "RemoveContainer" containerID="5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20" Jan 29 12:16:58 crc kubenswrapper[4840]: E0129 12:16:58.059360 4840 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\": container with ID starting with 5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20 not found: ID does not exist" containerID="5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.059382 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20"} err="failed to get container status \"5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\": rpc error: code = NotFound desc = could not find container \"5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20\": container with ID starting with 5acbf1b4bdbbcbae319dd91f69430108cedcf8d38e5f449b5d6c88058cacde20 not found: ID does not exist" Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.853571 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"ef11260c98c05bc4d9da489c3781dc5e8097e64f3437957c150a60e73f937410"} Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.853932 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"71ca83ef5831afe87786558581bbed4627a49f1bf9216e2d177d7f1974bf3317"} Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.853970 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"fc5721c7365a7e6b04cd981ab88fa390feb2b2d6e69edea38d32f19484c720e9"} Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.853984 4840 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"e477cc68468f23acbdf56ca753eebfdb263738747a7300b8252adc2aa81ffe88"} Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.853999 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"8b89a2ca5435e5e5dfe0c13d64b72be3f15a1357a4eaeafca88e8a6fb5a3bda0"} Jan 29 12:16:58 crc kubenswrapper[4840]: I0129 12:16:58.854015 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"5c236d677b5e54742520c4a6b4f5212ceb3dd196244469e80cb3b591feb1c53d"} Jan 29 12:16:59 crc kubenswrapper[4840]: I0129 12:16:59.011143 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b331ae03-7000-435b-8cb4-65da0c67d876" path="/var/lib/kubelet/pods/b331ae03-7000-435b-8cb4-65da0c67d876/volumes" Jan 29 12:17:01 crc kubenswrapper[4840]: I0129 12:17:01.880488 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"88a3c9f6dac13182980af2723ce4bb1bdab57bb72e2b0f2b890f2c2561b7ebae"} Jan 29 12:17:03 crc kubenswrapper[4840]: I0129 12:17:03.899061 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" event={"ID":"fbe9ea30-54c1-4b75-922a-4c8af95b23b1","Type":"ContainerStarted","Data":"17a144e884bf32568b19c335a5e000b0cacd49303a9a60c028494113b6fd7696"} Jan 29 12:17:03 crc kubenswrapper[4840]: I0129 12:17:03.899396 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:17:03 crc kubenswrapper[4840]: I0129 12:17:03.934738 
4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:17:03 crc kubenswrapper[4840]: I0129 12:17:03.940684 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" podStartSLOduration=7.940667088 podStartE2EDuration="7.940667088s" podCreationTimestamp="2026-01-29 12:16:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:17:03.931523656 +0000 UTC m=+755.594503559" watchObservedRunningTime="2026-01-29 12:17:03.940667088 +0000 UTC m=+755.603646981" Jan 29 12:17:04 crc kubenswrapper[4840]: I0129 12:17:04.904454 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:17:04 crc kubenswrapper[4840]: I0129 12:17:04.904888 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:17:04 crc kubenswrapper[4840]: I0129 12:17:04.968384 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:17:16 crc kubenswrapper[4840]: I0129 12:17:16.822457 4840 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 29 12:17:23 crc kubenswrapper[4840]: I0129 12:17:23.522842 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:17:23 crc kubenswrapper[4840]: I0129 12:17:23.524005 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" 
podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:17:27 crc kubenswrapper[4840]: I0129 12:17:27.365082 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-d826q" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.165134 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh"] Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.167430 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.172480 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.181158 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh"] Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.336621 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.336783 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-bundle\") pod 
\"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.336836 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9txj\" (UniqueName: \"kubernetes.io/projected/4dd390ad-1343-4254-8e83-b36ce6269dd4-kube-api-access-t9txj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.437596 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.437651 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9txj\" (UniqueName: \"kubernetes.io/projected/4dd390ad-1343-4254-8e83-b36ce6269dd4-kube-api-access-t9txj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.437678 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: 
\"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.438238 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.438459 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.459212 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9txj\" (UniqueName: \"kubernetes.io/projected/4dd390ad-1343-4254-8e83-b36ce6269dd4-kube-api-access-t9txj\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.488256 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:33 crc kubenswrapper[4840]: I0129 12:17:33.730836 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh"] Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.102680 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" event={"ID":"4dd390ad-1343-4254-8e83-b36ce6269dd4","Type":"ContainerStarted","Data":"a0693be35ebe8cbeb9a5320634fe294e6484a80e46364263154e410e1f5893dc"} Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.103146 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" event={"ID":"4dd390ad-1343-4254-8e83-b36ce6269dd4","Type":"ContainerStarted","Data":"2d0beedeb2368632f020514acb66a56274f65c25b8a01d7f8e133b8c6131ac53"} Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.836281 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kbkj6"] Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.840116 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.854234 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbkj6"] Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.966154 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-utilities\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.966236 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xq8jr\" (UniqueName: \"kubernetes.io/projected/5c430082-e255-4a93-9614-be8a46c4fa0a-kube-api-access-xq8jr\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:34 crc kubenswrapper[4840]: I0129 12:17:34.966495 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-catalog-content\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.068884 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-utilities\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.068988 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xq8jr\" (UniqueName: \"kubernetes.io/projected/5c430082-e255-4a93-9614-be8a46c4fa0a-kube-api-access-xq8jr\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.069110 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-catalog-content\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.070828 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-utilities\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.070845 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-catalog-content\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.098340 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xq8jr\" (UniqueName: \"kubernetes.io/projected/5c430082-e255-4a93-9614-be8a46c4fa0a-kube-api-access-xq8jr\") pod \"redhat-operators-kbkj6\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") " pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.123709 4840 generic.go:334] "Generic (PLEG): container finished" podID="4dd390ad-1343-4254-8e83-b36ce6269dd4" 
containerID="a0693be35ebe8cbeb9a5320634fe294e6484a80e46364263154e410e1f5893dc" exitCode=0 Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.123777 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" event={"ID":"4dd390ad-1343-4254-8e83-b36ce6269dd4","Type":"ContainerDied","Data":"a0693be35ebe8cbeb9a5320634fe294e6484a80e46364263154e410e1f5893dc"} Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.168635 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbkj6" Jan 29 12:17:35 crc kubenswrapper[4840]: I0129 12:17:35.381218 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kbkj6"] Jan 29 12:17:36 crc kubenswrapper[4840]: I0129 12:17:36.131657 4840 generic.go:334] "Generic (PLEG): container finished" podID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerID="ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509" exitCode=0 Jan 29 12:17:36 crc kubenswrapper[4840]: I0129 12:17:36.131737 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbkj6" event={"ID":"5c430082-e255-4a93-9614-be8a46c4fa0a","Type":"ContainerDied","Data":"ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509"} Jan 29 12:17:36 crc kubenswrapper[4840]: I0129 12:17:36.131823 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbkj6" event={"ID":"5c430082-e255-4a93-9614-be8a46c4fa0a","Type":"ContainerStarted","Data":"f981f8f1cbfbecbf391a31135e62d31b8ddfdc9ff849a4ffb89a7a5fea532688"} Jan 29 12:17:37 crc kubenswrapper[4840]: I0129 12:17:37.140080 4840 generic.go:334] "Generic (PLEG): container finished" podID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerID="29d838dadb26066d111b30dc3bc4f1c9afa106312e6ab9578a7c56a49cb6f73a" exitCode=0 Jan 29 12:17:37 crc 
kubenswrapper[4840]: I0129 12:17:37.140158 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" event={"ID":"4dd390ad-1343-4254-8e83-b36ce6269dd4","Type":"ContainerDied","Data":"29d838dadb26066d111b30dc3bc4f1c9afa106312e6ab9578a7c56a49cb6f73a"} Jan 29 12:17:37 crc kubenswrapper[4840]: I0129 12:17:37.143565 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbkj6" event={"ID":"5c430082-e255-4a93-9614-be8a46c4fa0a","Type":"ContainerStarted","Data":"b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d"} Jan 29 12:17:38 crc kubenswrapper[4840]: I0129 12:17:38.153869 4840 generic.go:334] "Generic (PLEG): container finished" podID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerID="b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d" exitCode=0 Jan 29 12:17:38 crc kubenswrapper[4840]: I0129 12:17:38.154010 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbkj6" event={"ID":"5c430082-e255-4a93-9614-be8a46c4fa0a","Type":"ContainerDied","Data":"b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d"} Jan 29 12:17:38 crc kubenswrapper[4840]: I0129 12:17:38.158790 4840 generic.go:334] "Generic (PLEG): container finished" podID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerID="11efba20b6bebcbf6f836a18b200b60d123a5b15b229034046d1eff686fbf20d" exitCode=0 Jan 29 12:17:38 crc kubenswrapper[4840]: I0129 12:17:38.159512 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" event={"ID":"4dd390ad-1343-4254-8e83-b36ce6269dd4","Type":"ContainerDied","Data":"11efba20b6bebcbf6f836a18b200b60d123a5b15b229034046d1eff686fbf20d"} Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.169418 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-kbkj6" event={"ID":"5c430082-e255-4a93-9614-be8a46c4fa0a","Type":"ContainerStarted","Data":"b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c"} Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.195400 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kbkj6" podStartSLOduration=2.745077189 podStartE2EDuration="5.195363798s" podCreationTimestamp="2026-01-29 12:17:34 +0000 UTC" firstStartedPulling="2026-01-29 12:17:36.134929441 +0000 UTC m=+787.797909334" lastFinishedPulling="2026-01-29 12:17:38.58521604 +0000 UTC m=+790.248195943" observedRunningTime="2026-01-29 12:17:39.190776338 +0000 UTC m=+790.853756311" watchObservedRunningTime="2026-01-29 12:17:39.195363798 +0000 UTC m=+790.858343751" Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.454343 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.635113 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-util\") pod \"4dd390ad-1343-4254-8e83-b36ce6269dd4\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.635361 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9txj\" (UniqueName: \"kubernetes.io/projected/4dd390ad-1343-4254-8e83-b36ce6269dd4-kube-api-access-t9txj\") pod \"4dd390ad-1343-4254-8e83-b36ce6269dd4\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.636478 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-bundle\") pod \"4dd390ad-1343-4254-8e83-b36ce6269dd4\" (UID: \"4dd390ad-1343-4254-8e83-b36ce6269dd4\") " Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.637117 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-bundle" (OuterVolumeSpecName: "bundle") pod "4dd390ad-1343-4254-8e83-b36ce6269dd4" (UID: "4dd390ad-1343-4254-8e83-b36ce6269dd4"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.645538 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dd390ad-1343-4254-8e83-b36ce6269dd4-kube-api-access-t9txj" (OuterVolumeSpecName: "kube-api-access-t9txj") pod "4dd390ad-1343-4254-8e83-b36ce6269dd4" (UID: "4dd390ad-1343-4254-8e83-b36ce6269dd4"). InnerVolumeSpecName "kube-api-access-t9txj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.688972 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-util" (OuterVolumeSpecName: "util") pod "4dd390ad-1343-4254-8e83-b36ce6269dd4" (UID: "4dd390ad-1343-4254-8e83-b36ce6269dd4"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.739017 4840 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-util\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.739116 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9txj\" (UniqueName: \"kubernetes.io/projected/4dd390ad-1343-4254-8e83-b36ce6269dd4-kube-api-access-t9txj\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:39 crc kubenswrapper[4840]: I0129 12:17:39.739135 4840 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/4dd390ad-1343-4254-8e83-b36ce6269dd4-bundle\") on node \"crc\" DevicePath \"\"" Jan 29 12:17:40 crc kubenswrapper[4840]: I0129 12:17:40.176934 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" event={"ID":"4dd390ad-1343-4254-8e83-b36ce6269dd4","Type":"ContainerDied","Data":"2d0beedeb2368632f020514acb66a56274f65c25b8a01d7f8e133b8c6131ac53"} Jan 29 12:17:40 crc kubenswrapper[4840]: I0129 12:17:40.177009 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d0beedeb2368632f020514acb66a56274f65c25b8a01d7f8e133b8c6131ac53" Jan 29 12:17:40 crc kubenswrapper[4840]: I0129 12:17:40.177203 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.765336 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-qp4cd"] Jan 29 12:17:43 crc kubenswrapper[4840]: E0129 12:17:43.765890 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerName="pull" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.765904 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerName="pull" Jan 29 12:17:43 crc kubenswrapper[4840]: E0129 12:17:43.765915 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerName="extract" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.765923 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerName="extract" Jan 29 12:17:43 crc kubenswrapper[4840]: E0129 12:17:43.765961 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerName="util" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.765971 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerName="util" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.766123 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dd390ad-1343-4254-8e83-b36ce6269dd4" containerName="extract" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.766608 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.769430 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.769521 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lbscj" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.775879 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-qp4cd"] Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.777978 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.895740 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bknx\" (UniqueName: \"kubernetes.io/projected/31770d8e-79ba-43a3-81bd-d76b310b6acc-kube-api-access-9bknx\") pod \"nmstate-operator-646758c888-qp4cd\" (UID: \"31770d8e-79ba-43a3-81bd-d76b310b6acc\") " pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd" Jan 29 12:17:43 crc kubenswrapper[4840]: I0129 12:17:43.997136 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bknx\" (UniqueName: \"kubernetes.io/projected/31770d8e-79ba-43a3-81bd-d76b310b6acc-kube-api-access-9bknx\") pod \"nmstate-operator-646758c888-qp4cd\" (UID: \"31770d8e-79ba-43a3-81bd-d76b310b6acc\") " pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd" Jan 29 12:17:44 crc kubenswrapper[4840]: I0129 12:17:44.019619 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bknx\" (UniqueName: \"kubernetes.io/projected/31770d8e-79ba-43a3-81bd-d76b310b6acc-kube-api-access-9bknx\") pod \"nmstate-operator-646758c888-qp4cd\" (UID: 
\"31770d8e-79ba-43a3-81bd-d76b310b6acc\") " pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd"
Jan 29 12:17:44 crc kubenswrapper[4840]: I0129 12:17:44.082249 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd"
Jan 29 12:17:44 crc kubenswrapper[4840]: I0129 12:17:44.313603 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-qp4cd"]
Jan 29 12:17:45 crc kubenswrapper[4840]: I0129 12:17:45.168859 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kbkj6"
Jan 29 12:17:45 crc kubenswrapper[4840]: I0129 12:17:45.169155 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kbkj6"
Jan 29 12:17:45 crc kubenswrapper[4840]: I0129 12:17:45.204335 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kbkj6"
Jan 29 12:17:45 crc kubenswrapper[4840]: I0129 12:17:45.208143 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd" event={"ID":"31770d8e-79ba-43a3-81bd-d76b310b6acc","Type":"ContainerStarted","Data":"b76b28978f5d7124d1939208298f44917f6c72d70aac4d4d1d2e9cccbf0ab5f7"}
Jan 29 12:17:45 crc kubenswrapper[4840]: I0129 12:17:45.261322 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kbkj6"
Jan 29 12:17:47 crc kubenswrapper[4840]: I0129 12:17:47.610071 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbkj6"]
Jan 29 12:17:47 crc kubenswrapper[4840]: I0129 12:17:47.610911 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kbkj6" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="registry-server" containerID="cri-o://b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c" gracePeriod=2
Jan 29 12:17:48 crc kubenswrapper[4840]: I0129 12:17:48.225652 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd" event={"ID":"31770d8e-79ba-43a3-81bd-d76b310b6acc","Type":"ContainerStarted","Data":"5a00709c26dd835878aeebe673d2c34c97aa0f067e35dfceffdb37427a1dd6a4"}
Jan 29 12:17:48 crc kubenswrapper[4840]: I0129 12:17:48.241493 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-qp4cd" podStartSLOduration=2.132353117 podStartE2EDuration="5.241475385s" podCreationTimestamp="2026-01-29 12:17:43 +0000 UTC" firstStartedPulling="2026-01-29 12:17:44.321026477 +0000 UTC m=+795.984006370" lastFinishedPulling="2026-01-29 12:17:47.430148745 +0000 UTC m=+799.093128638" observedRunningTime="2026-01-29 12:17:48.240404687 +0000 UTC m=+799.903384590" watchObservedRunningTime="2026-01-29 12:17:48.241475385 +0000 UTC m=+799.904455278"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.102343 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbkj6"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.185547 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-utilities\") pod \"5c430082-e255-4a93-9614-be8a46c4fa0a\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") "
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.185631 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xq8jr\" (UniqueName: \"kubernetes.io/projected/5c430082-e255-4a93-9614-be8a46c4fa0a-kube-api-access-xq8jr\") pod \"5c430082-e255-4a93-9614-be8a46c4fa0a\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") "
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.185810 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-catalog-content\") pod \"5c430082-e255-4a93-9614-be8a46c4fa0a\" (UID: \"5c430082-e255-4a93-9614-be8a46c4fa0a\") "
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.186580 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-utilities" (OuterVolumeSpecName: "utilities") pod "5c430082-e255-4a93-9614-be8a46c4fa0a" (UID: "5c430082-e255-4a93-9614-be8a46c4fa0a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.192051 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c430082-e255-4a93-9614-be8a46c4fa0a-kube-api-access-xq8jr" (OuterVolumeSpecName: "kube-api-access-xq8jr") pod "5c430082-e255-4a93-9614-be8a46c4fa0a" (UID: "5c430082-e255-4a93-9614-be8a46c4fa0a"). InnerVolumeSpecName "kube-api-access-xq8jr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.235650 4840 generic.go:334] "Generic (PLEG): container finished" podID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerID="b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c" exitCode=0
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.235695 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbkj6" event={"ID":"5c430082-e255-4a93-9614-be8a46c4fa0a","Type":"ContainerDied","Data":"b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c"}
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.235729 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kbkj6"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.235750 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kbkj6" event={"ID":"5c430082-e255-4a93-9614-be8a46c4fa0a","Type":"ContainerDied","Data":"f981f8f1cbfbecbf391a31135e62d31b8ddfdc9ff849a4ffb89a7a5fea532688"}
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.235769 4840 scope.go:117] "RemoveContainer" containerID="b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.260060 4840 scope.go:117] "RemoveContainer" containerID="b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.276774 4840 scope.go:117] "RemoveContainer" containerID="ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.287573 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.287605 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xq8jr\" (UniqueName: \"kubernetes.io/projected/5c430082-e255-4a93-9614-be8a46c4fa0a-kube-api-access-xq8jr\") on node \"crc\" DevicePath \"\""
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.292754 4840 scope.go:117] "RemoveContainer" containerID="b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c"
Jan 29 12:17:49 crc kubenswrapper[4840]: E0129 12:17:49.293153 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c\": container with ID starting with b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c not found: ID does not exist" containerID="b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.293247 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c"} err="failed to get container status \"b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c\": rpc error: code = NotFound desc = could not find container \"b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c\": container with ID starting with b1c9e8e8d7376c2020cc898241cc679ff130f42e8bddb543475b83ba8e99821c not found: ID does not exist"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.293325 4840 scope.go:117] "RemoveContainer" containerID="b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d"
Jan 29 12:17:49 crc kubenswrapper[4840]: E0129 12:17:49.293710 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d\": container with ID starting with b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d not found: ID does not exist" containerID="b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.293737 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d"} err="failed to get container status \"b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d\": rpc error: code = NotFound desc = could not find container \"b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d\": container with ID starting with b16835e63f1f4184efa5c981c1c2178402b9fbd45e597a32fba805e4eb519c7d not found: ID does not exist"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.293762 4840 scope.go:117] "RemoveContainer" containerID="ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509"
Jan 29 12:17:49 crc kubenswrapper[4840]: E0129 12:17:49.294260 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509\": container with ID starting with ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509 not found: ID does not exist" containerID="ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.294341 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509"} err="failed to get container status \"ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509\": rpc error: code = NotFound desc = could not find container \"ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509\": container with ID starting with ddf03e630747ca6e37ff87598747c8c8362c76bb3b4af69e1eebc703c56b5509 not found: ID does not exist"
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.319291 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c430082-e255-4a93-9614-be8a46c4fa0a" (UID: "5c430082-e255-4a93-9614-be8a46c4fa0a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.388336 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c430082-e255-4a93-9614-be8a46c4fa0a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.570324 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kbkj6"]
Jan 29 12:17:49 crc kubenswrapper[4840]: I0129 12:17:49.574841 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kbkj6"]
Jan 29 12:17:51 crc kubenswrapper[4840]: I0129 12:17:51.012534 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" path="/var/lib/kubelet/pods/5c430082-e255-4a93-9614-be8a46c4fa0a/volumes"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.439795 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-htgbx"]
Jan 29 12:17:53 crc kubenswrapper[4840]: E0129 12:17:53.440088 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="extract-utilities"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.440105 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="extract-utilities"
Jan 29 12:17:53 crc kubenswrapper[4840]: E0129 12:17:53.440120 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="extract-content"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.440130 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="extract-content"
Jan 29 12:17:53 crc kubenswrapper[4840]: E0129 12:17:53.440145 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="registry-server"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.440154 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="registry-server"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.440246 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c430082-e255-4a93-9614-be8a46c4fa0a" containerName="registry-server"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.440908 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.443018 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xz4gv"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.457656 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"]
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.459025 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.461520 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.464366 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-htgbx"]
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.470607 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcpl9\" (UniqueName: \"kubernetes.io/projected/bab7acdd-5080-4d9f-9889-81c2a88f0dc7-kube-api-access-tcpl9\") pod \"nmstate-metrics-54757c584b-htgbx\" (UID: \"bab7acdd-5080-4d9f-9889-81c2a88f0dc7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.480048 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-g88ch"]
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.480925 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.495571 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"]
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.521890 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.521970 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.522024 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.522643 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5eea126e892ef85191aa623b40e4a1a85f8aad602ec61c374067a4fa5bc21cd1"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.522711 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://5eea126e892ef85191aa623b40e4a1a85f8aad602ec61c374067a4fa5bc21cd1" gracePeriod=600
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.572626 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-nmstate-lock\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.572688 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7ncw\" (UniqueName: \"kubernetes.io/projected/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-kube-api-access-k7ncw\") pod \"nmstate-webhook-8474b5b9d8-bq4k4\" (UID: \"3a8ff3f5-bc33-4d34-8a3d-9947121b510d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.573048 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bq4k4\" (UID: \"3a8ff3f5-bc33-4d34-8a3d-9947121b510d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.573255 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tcpl9\" (UniqueName: \"kubernetes.io/projected/bab7acdd-5080-4d9f-9889-81c2a88f0dc7-kube-api-access-tcpl9\") pod \"nmstate-metrics-54757c584b-htgbx\" (UID: \"bab7acdd-5080-4d9f-9889-81c2a88f0dc7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.573323 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-dbus-socket\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.573445 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7mww\" (UniqueName: \"kubernetes.io/projected/1d956b0e-870f-4713-af30-f1726121d630-kube-api-access-f7mww\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.573511 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-ovs-socket\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.595554 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tcpl9\" (UniqueName: \"kubernetes.io/projected/bab7acdd-5080-4d9f-9889-81c2a88f0dc7-kube-api-access-tcpl9\") pod \"nmstate-metrics-54757c584b-htgbx\" (UID: \"bab7acdd-5080-4d9f-9889-81c2a88f0dc7\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.606332 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"]
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.607012 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.611391 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.611416 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.611419 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-2npfp"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.617052 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"]
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.675603 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bq4k4\" (UID: \"3a8ff3f5-bc33-4d34-8a3d-9947121b510d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.675865 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/676e6927-6277-4bf5-bb70-7b1142c3ff01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.676022 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-dbus-socket\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.676543 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7mww\" (UniqueName: \"kubernetes.io/projected/1d956b0e-870f-4713-af30-f1726121d630-kube-api-access-f7mww\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.677015 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnccg\" (UniqueName: \"kubernetes.io/projected/676e6927-6277-4bf5-bb70-7b1142c3ff01-kube-api-access-dnccg\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.677139 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-ovs-socket\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.677270 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/676e6927-6277-4bf5-bb70-7b1142c3ff01-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.677383 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-nmstate-lock\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.677516 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7ncw\" (UniqueName: \"kubernetes.io/projected/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-kube-api-access-k7ncw\") pod \"nmstate-webhook-8474b5b9d8-bq4k4\" (UID: \"3a8ff3f5-bc33-4d34-8a3d-9947121b510d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.676504 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-dbus-socket\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: E0129 12:17:53.675767 4840 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Jan 29 12:17:53 crc kubenswrapper[4840]: E0129 12:17:53.678047 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-tls-key-pair podName:3a8ff3f5-bc33-4d34-8a3d-9947121b510d nodeName:}" failed. No retries permitted until 2026-01-29 12:17:54.178016331 +0000 UTC m=+805.840996224 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-bq4k4" (UID: "3a8ff3f5-bc33-4d34-8a3d-9947121b510d") : secret "openshift-nmstate-webhook" not found
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.677241 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-ovs-socket\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.677473 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1d956b0e-870f-4713-af30-f1726121d630-nmstate-lock\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.695552 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7mww\" (UniqueName: \"kubernetes.io/projected/1d956b0e-870f-4713-af30-f1726121d630-kube-api-access-f7mww\") pod \"nmstate-handler-g88ch\" (UID: \"1d956b0e-870f-4713-af30-f1726121d630\") " pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.696321 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7ncw\" (UniqueName: \"kubernetes.io/projected/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-kube-api-access-k7ncw\") pod \"nmstate-webhook-8474b5b9d8-bq4k4\" (UID: \"3a8ff3f5-bc33-4d34-8a3d-9947121b510d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.763388 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.778656 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/676e6927-6277-4bf5-bb70-7b1142c3ff01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.779045 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnccg\" (UniqueName: \"kubernetes.io/projected/676e6927-6277-4bf5-bb70-7b1142c3ff01-kube-api-access-dnccg\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: E0129 12:17:53.779123 4840 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.779158 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/676e6927-6277-4bf5-bb70-7b1142c3ff01-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: E0129 12:17:53.779204 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/676e6927-6277-4bf5-bb70-7b1142c3ff01-plugin-serving-cert podName:676e6927-6277-4bf5-bb70-7b1142c3ff01 nodeName:}" failed. No retries permitted until 2026-01-29 12:17:54.279184291 +0000 UTC m=+805.942164254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/676e6927-6277-4bf5-bb70-7b1142c3ff01-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-45hj6" (UID: "676e6927-6277-4bf5-bb70-7b1142c3ff01") : secret "plugin-serving-cert" not found
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.780354 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/676e6927-6277-4bf5-bb70-7b1142c3ff01-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.802146 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnccg\" (UniqueName: \"kubernetes.io/projected/676e6927-6277-4bf5-bb70-7b1142c3ff01-kube-api-access-dnccg\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.823711 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6d67db987b-czh6b"]
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.824465 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.832144 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-g88ch"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.850410 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d67db987b-czh6b"]
Jan 29 12:17:53 crc kubenswrapper[4840]: W0129 12:17:53.872698 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d956b0e_870f_4713_af30_f1726121d630.slice/crio-3efe231af151124217f3198890058e31a5ac4b82a612ae505533705d159abcd3 WatchSource:0}: Error finding container 3efe231af151124217f3198890058e31a5ac4b82a612ae505533705d159abcd3: Status 404 returned error can't find the container with id 3efe231af151124217f3198890058e31a5ac4b82a612ae505533705d159abcd3
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.879959 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-service-ca\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.880007 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79216eef-9bac-48d7-b928-180f45e49d15-console-oauth-config\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.880065 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-console-config\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.880109 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-trusted-ca-bundle\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.880133 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-oauth-serving-cert\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.880159 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79216eef-9bac-48d7-b928-180f45e49d15-console-serving-cert\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.880192 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpf55\" (UniqueName: \"kubernetes.io/projected/79216eef-9bac-48d7-b928-180f45e49d15-kube-api-access-kpf55\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.981919 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79216eef-9bac-48d7-b928-180f45e49d15-console-oauth-config\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.982266 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-console-config\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.982295 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-trusted-ca-bundle\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.982315 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-oauth-serving-cert\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.982355 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79216eef-9bac-48d7-b928-180f45e49d15-console-serving-cert\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.982386 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpf55\" (UniqueName: \"kubernetes.io/projected/79216eef-9bac-48d7-b928-180f45e49d15-kube-api-access-kpf55\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.982428 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-service-ca\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.983244 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-service-ca\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.983935 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-oauth-serving-cert\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.984541 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-console-config\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b"
Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.985854 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79216eef-9bac-48d7-b928-180f45e49d15-trusted-ca-bundle\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") "
pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.989529 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/79216eef-9bac-48d7-b928-180f45e49d15-console-oauth-config\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:17:53 crc kubenswrapper[4840]: I0129 12:17:53.989646 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/79216eef-9bac-48d7-b928-180f45e49d15-console-serving-cert\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.000693 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpf55\" (UniqueName: \"kubernetes.io/projected/79216eef-9bac-48d7-b928-180f45e49d15-kube-api-access-kpf55\") pod \"console-6d67db987b-czh6b\" (UID: \"79216eef-9bac-48d7-b928-180f45e49d15\") " pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.029782 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-htgbx"] Jan 29 12:17:54 crc kubenswrapper[4840]: W0129 12:17:54.041415 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbab7acdd_5080_4d9f_9889_81c2a88f0dc7.slice/crio-d3aaf30c9240181b6b32a2066b39b425532e5ad671f66099be8a7476c4b4245a WatchSource:0}: Error finding container d3aaf30c9240181b6b32a2066b39b425532e5ad671f66099be8a7476c4b4245a: Status 404 returned error can't find the container with id d3aaf30c9240181b6b32a2066b39b425532e5ad671f66099be8a7476c4b4245a Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 
12:17:54.145102 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.186112 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bq4k4\" (UID: \"3a8ff3f5-bc33-4d34-8a3d-9947121b510d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.190390 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/3a8ff3f5-bc33-4d34-8a3d-9947121b510d-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-bq4k4\" (UID: \"3a8ff3f5-bc33-4d34-8a3d-9947121b510d\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.272632 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="5eea126e892ef85191aa623b40e4a1a85f8aad602ec61c374067a4fa5bc21cd1" exitCode=0 Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.273367 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"5eea126e892ef85191aa623b40e4a1a85f8aad602ec61c374067a4fa5bc21cd1"} Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.273429 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"7baefb030f501ff291c20d387cd99f80bc3c294136bd6bcef30704b577ae25fb"} Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.273451 4840 scope.go:117] "RemoveContainer" 
containerID="e10267b8e4133aafc62b40dd54cb6e7f2b7971650f1dabb0aab935dd2a060868" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.277317 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g88ch" event={"ID":"1d956b0e-870f-4713-af30-f1726121d630","Type":"ContainerStarted","Data":"3efe231af151124217f3198890058e31a5ac4b82a612ae505533705d159abcd3"} Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.280616 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx" event={"ID":"bab7acdd-5080-4d9f-9889-81c2a88f0dc7","Type":"ContainerStarted","Data":"d3aaf30c9240181b6b32a2066b39b425532e5ad671f66099be8a7476c4b4245a"} Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.288213 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/676e6927-6277-4bf5-bb70-7b1142c3ff01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.294499 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/676e6927-6277-4bf5-bb70-7b1142c3ff01-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-45hj6\" (UID: \"676e6927-6277-4bf5-bb70-7b1142c3ff01\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.342464 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6d67db987b-czh6b"] Jan 29 12:17:54 crc kubenswrapper[4840]: W0129 12:17:54.350933 4840 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79216eef_9bac_48d7_b928_180f45e49d15.slice/crio-898019617cafd6220376ed37b7563bb40f776d8f1d9b15b8ef324b16bd644edf WatchSource:0}: Error finding container 898019617cafd6220376ed37b7563bb40f776d8f1d9b15b8ef324b16bd644edf: Status 404 returned error can't find the container with id 898019617cafd6220376ed37b7563bb40f776d8f1d9b15b8ef324b16bd644edf Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.380545 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.558438 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6" Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.781587 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4"] Jan 29 12:17:54 crc kubenswrapper[4840]: I0129 12:17:54.810479 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6"] Jan 29 12:17:54 crc kubenswrapper[4840]: W0129 12:17:54.818083 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod676e6927_6277_4bf5_bb70_7b1142c3ff01.slice/crio-8312ee2e0a20ee9172173a8dddab6341a019bf9f9604038fb72246f3d43af559 WatchSource:0}: Error finding container 8312ee2e0a20ee9172173a8dddab6341a019bf9f9604038fb72246f3d43af559: Status 404 returned error can't find the container with id 8312ee2e0a20ee9172173a8dddab6341a019bf9f9604038fb72246f3d43af559 Jan 29 12:17:55 crc kubenswrapper[4840]: I0129 12:17:55.288251 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d67db987b-czh6b" 
event={"ID":"79216eef-9bac-48d7-b928-180f45e49d15","Type":"ContainerStarted","Data":"1018b3c9bb86f2fa58b07ec6075cbe232acf94902941793b327c7d3b5d6b6f3d"} Jan 29 12:17:55 crc kubenswrapper[4840]: I0129 12:17:55.288550 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6d67db987b-czh6b" event={"ID":"79216eef-9bac-48d7-b928-180f45e49d15","Type":"ContainerStarted","Data":"898019617cafd6220376ed37b7563bb40f776d8f1d9b15b8ef324b16bd644edf"} Jan 29 12:17:55 crc kubenswrapper[4840]: I0129 12:17:55.290119 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" event={"ID":"3a8ff3f5-bc33-4d34-8a3d-9947121b510d","Type":"ContainerStarted","Data":"b39297d66d714b4e9a023019347b386fb6c2948871f91934083a44e094570a77"} Jan 29 12:17:55 crc kubenswrapper[4840]: I0129 12:17:55.294725 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6" event={"ID":"676e6927-6277-4bf5-bb70-7b1142c3ff01","Type":"ContainerStarted","Data":"8312ee2e0a20ee9172173a8dddab6341a019bf9f9604038fb72246f3d43af559"} Jan 29 12:17:55 crc kubenswrapper[4840]: I0129 12:17:55.309713 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6d67db987b-czh6b" podStartSLOduration=2.309695418 podStartE2EDuration="2.309695418s" podCreationTimestamp="2026-01-29 12:17:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:17:55.303801283 +0000 UTC m=+806.966781176" watchObservedRunningTime="2026-01-29 12:17:55.309695418 +0000 UTC m=+806.972675311" Jan 29 12:17:56 crc kubenswrapper[4840]: I0129 12:17:56.305816 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" 
event={"ID":"3a8ff3f5-bc33-4d34-8a3d-9947121b510d","Type":"ContainerStarted","Data":"4fb65cf457ba1f30750a7ce76836adeb64090fe15519137655576bab85ebb1c1"} Jan 29 12:17:56 crc kubenswrapper[4840]: I0129 12:17:56.306781 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" Jan 29 12:17:56 crc kubenswrapper[4840]: I0129 12:17:56.307580 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx" event={"ID":"bab7acdd-5080-4d9f-9889-81c2a88f0dc7","Type":"ContainerStarted","Data":"802562b5831bb96ac512faa4a7159e9f1f500b94f124c9374d78bc39927bd573"} Jan 29 12:17:56 crc kubenswrapper[4840]: I0129 12:17:56.331430 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" podStartSLOduration=2.119200638 podStartE2EDuration="3.331396202s" podCreationTimestamp="2026-01-29 12:17:53 +0000 UTC" firstStartedPulling="2026-01-29 12:17:54.802365914 +0000 UTC m=+806.465345797" lastFinishedPulling="2026-01-29 12:17:56.014561468 +0000 UTC m=+807.677541361" observedRunningTime="2026-01-29 12:17:56.322299312 +0000 UTC m=+807.985279265" watchObservedRunningTime="2026-01-29 12:17:56.331396202 +0000 UTC m=+807.994376095" Jan 29 12:17:57 crc kubenswrapper[4840]: I0129 12:17:57.319745 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6" event={"ID":"676e6927-6277-4bf5-bb70-7b1142c3ff01","Type":"ContainerStarted","Data":"ed5cbbc14c7c456fa33e3a8b519a140dcd01757cee148973bf062a828187a90c"} Jan 29 12:17:57 crc kubenswrapper[4840]: I0129 12:17:57.323168 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g88ch" event={"ID":"1d956b0e-870f-4713-af30-f1726121d630","Type":"ContainerStarted","Data":"1793aacf8361fe5aa82e1e62f1a29ff8d85d9c0c5ad82011f57f26d60bc7fa28"} Jan 29 12:17:57 crc kubenswrapper[4840]: I0129 
12:17:57.348916 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-45hj6" podStartSLOduration=2.170015785 podStartE2EDuration="4.348887505s" podCreationTimestamp="2026-01-29 12:17:53 +0000 UTC" firstStartedPulling="2026-01-29 12:17:54.820669215 +0000 UTC m=+806.483649098" lastFinishedPulling="2026-01-29 12:17:56.999540925 +0000 UTC m=+808.662520818" observedRunningTime="2026-01-29 12:17:57.338423879 +0000 UTC m=+809.001403772" watchObservedRunningTime="2026-01-29 12:17:57.348887505 +0000 UTC m=+809.011867408" Jan 29 12:17:57 crc kubenswrapper[4840]: I0129 12:17:57.365988 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-g88ch" podStartSLOduration=2.200435455 podStartE2EDuration="4.365921583s" podCreationTimestamp="2026-01-29 12:17:53 +0000 UTC" firstStartedPulling="2026-01-29 12:17:53.874756665 +0000 UTC m=+805.537736558" lastFinishedPulling="2026-01-29 12:17:56.040242793 +0000 UTC m=+807.703222686" observedRunningTime="2026-01-29 12:17:57.365354347 +0000 UTC m=+809.028334240" watchObservedRunningTime="2026-01-29 12:17:57.365921583 +0000 UTC m=+809.028901496" Jan 29 12:17:58 crc kubenswrapper[4840]: I0129 12:17:58.337001 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-g88ch" Jan 29 12:17:59 crc kubenswrapper[4840]: I0129 12:17:59.347200 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx" event={"ID":"bab7acdd-5080-4d9f-9889-81c2a88f0dc7","Type":"ContainerStarted","Data":"06bc4e2c439399a0c52a472b698f69f0d403ca3c1dbd6c00d99b99e94960238d"} Jan 29 12:17:59 crc kubenswrapper[4840]: I0129 12:17:59.384050 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-htgbx" podStartSLOduration=1.93526314 podStartE2EDuration="6.384020954s" 
podCreationTimestamp="2026-01-29 12:17:53 +0000 UTC" firstStartedPulling="2026-01-29 12:17:54.043714029 +0000 UTC m=+805.706693922" lastFinishedPulling="2026-01-29 12:17:58.492471843 +0000 UTC m=+810.155451736" observedRunningTime="2026-01-29 12:17:59.37209355 +0000 UTC m=+811.035073463" watchObservedRunningTime="2026-01-29 12:17:59.384020954 +0000 UTC m=+811.047000937" Jan 29 12:18:03 crc kubenswrapper[4840]: I0129 12:18:03.864127 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-g88ch" Jan 29 12:18:04 crc kubenswrapper[4840]: I0129 12:18:04.145203 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:18:04 crc kubenswrapper[4840]: I0129 12:18:04.145278 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:18:04 crc kubenswrapper[4840]: I0129 12:18:04.153337 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:18:04 crc kubenswrapper[4840]: I0129 12:18:04.382197 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6d67db987b-czh6b" Jan 29 12:18:04 crc kubenswrapper[4840]: I0129 12:18:04.437322 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dltrs"] Jan 29 12:18:14 crc kubenswrapper[4840]: I0129 12:18:14.390929 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-bq4k4" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.509112 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s"] Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.511038 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.514163 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.521499 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s"] Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.599329 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.599382 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.599466 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmlcp\" (UniqueName: \"kubernetes.io/projected/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-kube-api-access-nmlcp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: 
I0129 12:18:28.700906 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.700994 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmlcp\" (UniqueName: \"kubernetes.io/projected/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-kube-api-access-nmlcp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.701047 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.701525 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.701547 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.724987 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmlcp\" (UniqueName: \"kubernetes.io/projected/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-kube-api-access-nmlcp\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:28 crc kubenswrapper[4840]: I0129 12:18:28.833550 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.049740 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s"] Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.486866 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-dltrs" podUID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" containerName="console" containerID="cri-o://298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87" gracePeriod=15 Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.544228 4840 generic.go:334] "Generic (PLEG): container finished" podID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerID="c16e478eef76b7cb9801c28f6b2c3f675e6767bccaca24881f827a06a88bba29" exitCode=0 Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.544286 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" event={"ID":"a9fe01c2-57da-44b1-be60-e0d27ffc98b8","Type":"ContainerDied","Data":"c16e478eef76b7cb9801c28f6b2c3f675e6767bccaca24881f827a06a88bba29"} Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.544358 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" event={"ID":"a9fe01c2-57da-44b1-be60-e0d27ffc98b8","Type":"ContainerStarted","Data":"6630788efa1eb7d217cd4ed3da21dda0b7b37319b53e5d92f24a5f2c37818ecf"} Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.836248 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dltrs_4832ef07-2202-4b5b-9ed5-70bc621ea8dd/console/0.log" Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.836325 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dltrs" Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.917150 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-service-ca\") pod \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.917802 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-oauth-serving-cert\") pod \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.917880 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-oauth-config\") pod 
\"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.917920 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-config\") pod \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.918041 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-serving-cert\") pod \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.918072 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-trusted-ca-bundle\") pod \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.918168 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnw4t\" (UniqueName: \"kubernetes.io/projected/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-kube-api-access-cnw4t\") pod \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\" (UID: \"4832ef07-2202-4b5b-9ed5-70bc621ea8dd\") " Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.918704 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-service-ca" (OuterVolumeSpecName: "service-ca") pod "4832ef07-2202-4b5b-9ed5-70bc621ea8dd" (UID: "4832ef07-2202-4b5b-9ed5-70bc621ea8dd"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.918743 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "4832ef07-2202-4b5b-9ed5-70bc621ea8dd" (UID: "4832ef07-2202-4b5b-9ed5-70bc621ea8dd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.918730 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-config" (OuterVolumeSpecName: "console-config") pod "4832ef07-2202-4b5b-9ed5-70bc621ea8dd" (UID: "4832ef07-2202-4b5b-9ed5-70bc621ea8dd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.919416 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "4832ef07-2202-4b5b-9ed5-70bc621ea8dd" (UID: "4832ef07-2202-4b5b-9ed5-70bc621ea8dd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.926465 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "4832ef07-2202-4b5b-9ed5-70bc621ea8dd" (UID: "4832ef07-2202-4b5b-9ed5-70bc621ea8dd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.926546 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-kube-api-access-cnw4t" (OuterVolumeSpecName: "kube-api-access-cnw4t") pod "4832ef07-2202-4b5b-9ed5-70bc621ea8dd" (UID: "4832ef07-2202-4b5b-9ed5-70bc621ea8dd"). InnerVolumeSpecName "kube-api-access-cnw4t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:18:29 crc kubenswrapper[4840]: I0129 12:18:29.927432 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "4832ef07-2202-4b5b-9ed5-70bc621ea8dd" (UID: "4832ef07-2202-4b5b-9ed5-70bc621ea8dd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.019899 4840 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-service-ca\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.019934 4840 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.019961 4840 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-oauth-config\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.019971 4840 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-config\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.019979 4840 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-console-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.019987 4840 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.019996 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnw4t\" (UniqueName: \"kubernetes.io/projected/4832ef07-2202-4b5b-9ed5-70bc621ea8dd-kube-api-access-cnw4t\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.551762 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-dltrs_4832ef07-2202-4b5b-9ed5-70bc621ea8dd/console/0.log"
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.551838 4840 generic.go:334] "Generic (PLEG): container finished" podID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" containerID="298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87" exitCode=2
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.551873 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dltrs" event={"ID":"4832ef07-2202-4b5b-9ed5-70bc621ea8dd","Type":"ContainerDied","Data":"298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87"}
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.551933 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-dltrs" event={"ID":"4832ef07-2202-4b5b-9ed5-70bc621ea8dd","Type":"ContainerDied","Data":"c1a7d91731d1ee8018dc7b6949e1b2ee6046161730526158a836cb4f3217be49"}
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.551935 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-dltrs"
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.551963 4840 scope.go:117] "RemoveContainer" containerID="298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87"
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.573069 4840 scope.go:117] "RemoveContainer" containerID="298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87"
Jan 29 12:18:30 crc kubenswrapper[4840]: E0129 12:18:30.574617 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87\": container with ID starting with 298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87 not found: ID does not exist" containerID="298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87"
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.574674 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87"} err="failed to get container status \"298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87\": rpc error: code = NotFound desc = could not find container \"298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87\": container with ID starting with 298c9f9c9b00afd41e17732a0058f847481b6e1a8d1573a64bfa2fed20efeb87 not found: ID does not exist"
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.581108 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-dltrs"]
Jan 29 12:18:30 crc kubenswrapper[4840]: I0129 12:18:30.584841 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-dltrs"]
Jan 29 12:18:31 crc kubenswrapper[4840]: I0129 12:18:31.009418 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" path="/var/lib/kubelet/pods/4832ef07-2202-4b5b-9ed5-70bc621ea8dd/volumes"
Jan 29 12:18:31 crc kubenswrapper[4840]: I0129 12:18:31.563720 4840 generic.go:334] "Generic (PLEG): container finished" podID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerID="956fb3e3c76fb94e2f6aa4cfcceaf38c8becbc45cbbbdb749bdde178e69c0904" exitCode=0
Jan 29 12:18:31 crc kubenswrapper[4840]: I0129 12:18:31.563794 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" event={"ID":"a9fe01c2-57da-44b1-be60-e0d27ffc98b8","Type":"ContainerDied","Data":"956fb3e3c76fb94e2f6aa4cfcceaf38c8becbc45cbbbdb749bdde178e69c0904"}
Jan 29 12:18:32 crc kubenswrapper[4840]: I0129 12:18:32.583914 4840 generic.go:334] "Generic (PLEG): container finished" podID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerID="f5dd48dfcb17233f7e9235cd4ff6c3c1715f3819fa071bf413b258666b0b6014" exitCode=0
Jan 29 12:18:32 crc kubenswrapper[4840]: I0129 12:18:32.584001 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" event={"ID":"a9fe01c2-57da-44b1-be60-e0d27ffc98b8","Type":"ContainerDied","Data":"f5dd48dfcb17233f7e9235cd4ff6c3c1715f3819fa071bf413b258666b0b6014"}
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.821261 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s"
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.870307 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-util\") pod \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") "
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.870453 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-bundle\") pod \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") "
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.870491 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmlcp\" (UniqueName: \"kubernetes.io/projected/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-kube-api-access-nmlcp\") pod \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\" (UID: \"a9fe01c2-57da-44b1-be60-e0d27ffc98b8\") "
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.871435 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-bundle" (OuterVolumeSpecName: "bundle") pod "a9fe01c2-57da-44b1-be60-e0d27ffc98b8" (UID: "a9fe01c2-57da-44b1-be60-e0d27ffc98b8"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.876525 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-kube-api-access-nmlcp" (OuterVolumeSpecName: "kube-api-access-nmlcp") pod "a9fe01c2-57da-44b1-be60-e0d27ffc98b8" (UID: "a9fe01c2-57da-44b1-be60-e0d27ffc98b8"). InnerVolumeSpecName "kube-api-access-nmlcp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.883211 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-util" (OuterVolumeSpecName: "util") pod "a9fe01c2-57da-44b1-be60-e0d27ffc98b8" (UID: "a9fe01c2-57da-44b1-be60-e0d27ffc98b8"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.972241 4840 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.972276 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmlcp\" (UniqueName: \"kubernetes.io/projected/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-kube-api-access-nmlcp\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:33 crc kubenswrapper[4840]: I0129 12:18:33.972286 4840 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a9fe01c2-57da-44b1-be60-e0d27ffc98b8-util\") on node \"crc\" DevicePath \"\""
Jan 29 12:18:34 crc kubenswrapper[4840]: I0129 12:18:34.599411 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s" event={"ID":"a9fe01c2-57da-44b1-be60-e0d27ffc98b8","Type":"ContainerDied","Data":"6630788efa1eb7d217cd4ed3da21dda0b7b37319b53e5d92f24a5f2c37818ecf"}
Jan 29 12:18:34 crc kubenswrapper[4840]: I0129 12:18:34.599464 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6630788efa1eb7d217cd4ed3da21dda0b7b37319b53e5d92f24a5f2c37818ecf"
Jan 29 12:18:34 crc kubenswrapper[4840]: I0129 12:18:34.599502 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.069525 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"]
Jan 29 12:18:43 crc kubenswrapper[4840]: E0129 12:18:43.070239 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerName="extract"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.070251 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerName="extract"
Jan 29 12:18:43 crc kubenswrapper[4840]: E0129 12:18:43.070268 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerName="pull"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.070274 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerName="pull"
Jan 29 12:18:43 crc kubenswrapper[4840]: E0129 12:18:43.070290 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" containerName="console"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.070296 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" containerName="console"
Jan 29 12:18:43 crc kubenswrapper[4840]: E0129 12:18:43.070310 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerName="util"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.070315 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerName="util"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.070403 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4832ef07-2202-4b5b-9ed5-70bc621ea8dd" containerName="console"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.070414 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9fe01c2-57da-44b1-be60-e0d27ffc98b8" containerName="extract"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.070805 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.074185 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.074277 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.074467 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.075244 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-bqvsg"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.086074 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.094824 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"]
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.191195 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccb6k\" (UniqueName: \"kubernetes.io/projected/d7492c31-2b6c-43e4-9cc9-73e03dd15384-kube-api-access-ccb6k\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.191489 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7492c31-2b6c-43e4-9cc9-73e03dd15384-apiservice-cert\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.191652 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7492c31-2b6c-43e4-9cc9-73e03dd15384-webhook-cert\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.292683 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7492c31-2b6c-43e4-9cc9-73e03dd15384-webhook-cert\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.292766 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccb6k\" (UniqueName: \"kubernetes.io/projected/d7492c31-2b6c-43e4-9cc9-73e03dd15384-kube-api-access-ccb6k\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.292803 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7492c31-2b6c-43e4-9cc9-73e03dd15384-apiservice-cert\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.298641 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d7492c31-2b6c-43e4-9cc9-73e03dd15384-apiservice-cert\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.299002 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d7492c31-2b6c-43e4-9cc9-73e03dd15384-webhook-cert\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.312804 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccb6k\" (UniqueName: \"kubernetes.io/projected/d7492c31-2b6c-43e4-9cc9-73e03dd15384-kube-api-access-ccb6k\") pod \"metallb-operator-controller-manager-77c47cd585-6mx2x\" (UID: \"d7492c31-2b6c-43e4-9cc9-73e03dd15384\") " pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.389185 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.399874 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"]
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.400637 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.402936 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.403240 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-zcjdl"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.403457 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.438173 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"]
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.495621 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7mnb\" (UniqueName: \"kubernetes.io/projected/e2c078cf-cce4-480b-9c30-f0c86abee27d-kube-api-access-q7mnb\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.496083 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2c078cf-cce4-480b-9c30-f0c86abee27d-webhook-cert\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.496116 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2c078cf-cce4-480b-9c30-f0c86abee27d-apiservice-cert\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.596929 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2c078cf-cce4-480b-9c30-f0c86abee27d-webhook-cert\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.597005 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2c078cf-cce4-480b-9c30-f0c86abee27d-apiservice-cert\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.597097 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7mnb\" (UniqueName: \"kubernetes.io/projected/e2c078cf-cce4-480b-9c30-f0c86abee27d-kube-api-access-q7mnb\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.602667 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e2c078cf-cce4-480b-9c30-f0c86abee27d-webhook-cert\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.604050 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e2c078cf-cce4-480b-9c30-f0c86abee27d-apiservice-cert\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.613538 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7mnb\" (UniqueName: \"kubernetes.io/projected/e2c078cf-cce4-480b-9c30-f0c86abee27d-kube-api-access-q7mnb\") pod \"metallb-operator-webhook-server-569cfcf96-gqqkk\" (UID: \"e2c078cf-cce4-480b-9c30-f0c86abee27d\") " pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.714599 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"]
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.815360 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:18:43 crc kubenswrapper[4840]: I0129 12:18:43.998022 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"]
Jan 29 12:18:44 crc kubenswrapper[4840]: W0129 12:18:44.005006 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2c078cf_cce4_480b_9c30_f0c86abee27d.slice/crio-ebea6186267b5bb726c5afff042c3bb95493602736bada7cd729b4a6ad6b9ae8 WatchSource:0}: Error finding container ebea6186267b5bb726c5afff042c3bb95493602736bada7cd729b4a6ad6b9ae8: Status 404 returned error can't find the container with id ebea6186267b5bb726c5afff042c3bb95493602736bada7cd729b4a6ad6b9ae8
Jan 29 12:18:44 crc kubenswrapper[4840]: I0129 12:18:44.658888 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x" event={"ID":"d7492c31-2b6c-43e4-9cc9-73e03dd15384","Type":"ContainerStarted","Data":"3f9dd150bbbc8ff7565980ed37104a535289723e1a82948f9b514f0c37894bfb"}
Jan 29 12:18:44 crc kubenswrapper[4840]: I0129 12:18:44.661702 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk" event={"ID":"e2c078cf-cce4-480b-9c30-f0c86abee27d","Type":"ContainerStarted","Data":"ebea6186267b5bb726c5afff042c3bb95493602736bada7cd729b4a6ad6b9ae8"}
Jan 29 12:19:00 crc kubenswrapper[4840]: E0129 12:19:00.435777 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d"
Jan 29 12:19:00 crc kubenswrapper[4840]: E0129 12:19:00.436669 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:webhook-server,Image:registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d,Command:[/controller],Args:[--disable-cert-rotation=true --port=7472 --log-level=info --webhook-mode=onlywebhook],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:monitoring,HostPort:0,ContainerPort:7472,Protocol:TCP,HostIP:,},ContainerPort{Name:webhook-server,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:METALLB_BGP_TYPE,Value:frr,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:metallb-operator.v4.18.0-202601071645,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7mnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/metrics,Port:{1 0 monitoring},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000700000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metallb-operator-webhook-server-569cfcf96-gqqkk_metallb-system(e2c078cf-cce4-480b-9c30-f0c86abee27d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 29 12:19:00 crc kubenswrapper[4840]: E0129 12:19:00.437825 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk" podUID="e2c078cf-cce4-480b-9c30-f0c86abee27d"
Jan 29 12:19:01 crc kubenswrapper[4840]: E0129 12:19:01.081821 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"webhook-server\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/metallb-rhel9@sha256:dfdc96eec0d63a5abd9e75003d3ed847582118f9cc839ad1094baf866733699d\\\"\"" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk" podUID="e2c078cf-cce4-480b-9c30-f0c86abee27d"
Jan 29 12:19:01 crc kubenswrapper[4840]: I0129 12:19:01.777389 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x" event={"ID":"d7492c31-2b6c-43e4-9cc9-73e03dd15384","Type":"ContainerStarted","Data":"7772877d7ee831654a7c011b793496863a49e95ad4c90083d5f9c7c9df7ed28b"}
Jan 29 12:19:01 crc kubenswrapper[4840]: I0129 12:19:01.778238 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:19:01 crc kubenswrapper[4840]: I0129 12:19:01.802404 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x" podStartSLOduration=1.439612087 podStartE2EDuration="18.802377526s" podCreationTimestamp="2026-01-29 12:18:43 +0000 UTC" firstStartedPulling="2026-01-29 12:18:43.728150366 +0000 UTC m=+855.391130269" lastFinishedPulling="2026-01-29 12:19:01.090915815 +0000 UTC m=+872.753895708" observedRunningTime="2026-01-29 12:19:01.801255426 +0000 UTC m=+873.464235329" watchObservedRunningTime="2026-01-29 12:19:01.802377526 +0000 UTC m=+873.465357419"
Jan 29 12:19:15 crc kubenswrapper[4840]: I0129 12:19:15.865580 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk" event={"ID":"e2c078cf-cce4-480b-9c30-f0c86abee27d","Type":"ContainerStarted","Data":"c80b4e6b462f4788a1137c4c10c80491bc744bb0d6e6e644f8fcb49d36179e8b"}
Jan 29 12:19:15 crc kubenswrapper[4840]: I0129 12:19:15.867006 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:19:15 crc kubenswrapper[4840]: I0129 12:19:15.910837 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk" podStartSLOduration=1.8776910679999999 podStartE2EDuration="32.910818938s" podCreationTimestamp="2026-01-29 12:18:43 +0000 UTC" firstStartedPulling="2026-01-29 12:18:44.00730105 +0000 UTC m=+855.670280943" lastFinishedPulling="2026-01-29 12:19:15.04042892 +0000 UTC m=+886.703408813" observedRunningTime="2026-01-29 12:19:15.90748699 +0000 UTC m=+887.570466883" watchObservedRunningTime="2026-01-29 12:19:15.910818938 +0000 UTC m=+887.573798831"
Jan 29 12:19:33 crc kubenswrapper[4840]: I0129 12:19:33.393649 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-77c47cd585-6mx2x"
Jan 29 12:19:33 crc kubenswrapper[4840]: I0129 12:19:33.822386 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-569cfcf96-gqqkk"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.131050 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-jtlnm"]
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.134586 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.136093 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"]
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.137157 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.137636 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.138490 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.138827 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-fwv4k"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.156241 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"]
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.157452 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.234539 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g26s9"]
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235065 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics-certs\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235180 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5m7v\" (UniqueName: \"kubernetes.io/projected/96f69b16-80eb-4b5c-bf75-39c26aefd643-kube-api-access-b5m7v\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235249 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-reloader\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235270 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235455 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57g2k\" (UniqueName: \"kubernetes.io/projected/b91baf2c-fd04-4784-8d12-a5318088ee87-kube-api-access-57g2k\") pod \"frr-k8s-webhook-server-7df86c4f6c-k5x6q\" (UID: \"b91baf2c-fd04-4784-8d12-a5318088ee87\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235587 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b91baf2c-fd04-4784-8d12-a5318088ee87-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k5x6q\" (UID: \"b91baf2c-fd04-4784-8d12-a5318088ee87\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235814 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-sockets\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235870 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g26s9"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235875 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-startup\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.235987 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-conf\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.238699 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-gt8dd"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.239068 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.239310 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.247106 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.258989 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-wpf9f"]
Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.260188 4840 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.262840 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.282547 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wpf9f"] Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337096 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-sockets\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337153 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-startup\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337199 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f76l6\" (UniqueName: \"kubernetes.io/projected/d3338d0d-1a9b-4176-84b4-a708ea1b574c-kube-api-access-f76l6\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337228 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-conf\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337256 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics-certs\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337303 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5m7v\" (UniqueName: \"kubernetes.io/projected/96f69b16-80eb-4b5c-bf75-39c26aefd643-kube-api-access-b5m7v\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337327 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfs2\" (UniqueName: \"kubernetes.io/projected/1e0994f2-a882-4579-b76f-e953f6b75a25-kube-api-access-dpfs2\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337357 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-reloader\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337381 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337408 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: 
\"kubernetes.io/configmap/1e0994f2-a882-4579-b76f-e953f6b75a25-metallb-excludel2\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337437 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-cert\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337470 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57g2k\" (UniqueName: \"kubernetes.io/projected/b91baf2c-fd04-4784-8d12-a5318088ee87-kube-api-access-57g2k\") pod \"frr-k8s-webhook-server-7df86c4f6c-k5x6q\" (UID: \"b91baf2c-fd04-4784-8d12-a5318088ee87\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337497 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337525 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-metrics-certs\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337555 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b91baf2c-fd04-4784-8d12-a5318088ee87-cert\") pod 
\"frr-k8s-webhook-server-7df86c4f6c-k5x6q\" (UID: \"b91baf2c-fd04-4784-8d12-a5318088ee87\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.337585 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-metrics-certs\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.338107 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-sockets\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.339450 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-startup\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.339686 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-frr-conf\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.339779 4840 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.339834 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics-certs 
podName:96f69b16-80eb-4b5c-bf75-39c26aefd643 nodeName:}" failed. No retries permitted until 2026-01-29 12:19:34.839815707 +0000 UTC m=+906.502795600 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics-certs") pod "frr-k8s-jtlnm" (UID: "96f69b16-80eb-4b5c-bf75-39c26aefd643") : secret "frr-k8s-certs-secret" not found Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.340498 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-reloader\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.340715 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.340983 4840 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.341039 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b91baf2c-fd04-4784-8d12-a5318088ee87-cert podName:b91baf2c-fd04-4784-8d12-a5318088ee87 nodeName:}" failed. No retries permitted until 2026-01-29 12:19:34.84102879 +0000 UTC m=+906.504008683 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/b91baf2c-fd04-4784-8d12-a5318088ee87-cert") pod "frr-k8s-webhook-server-7df86c4f6c-k5x6q" (UID: "b91baf2c-fd04-4784-8d12-a5318088ee87") : secret "frr-k8s-webhook-server-cert" not found Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.366975 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57g2k\" (UniqueName: \"kubernetes.io/projected/b91baf2c-fd04-4784-8d12-a5318088ee87-kube-api-access-57g2k\") pod \"frr-k8s-webhook-server-7df86c4f6c-k5x6q\" (UID: \"b91baf2c-fd04-4784-8d12-a5318088ee87\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.378294 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5m7v\" (UniqueName: \"kubernetes.io/projected/96f69b16-80eb-4b5c-bf75-39c26aefd643-kube-api-access-b5m7v\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.438624 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfs2\" (UniqueName: \"kubernetes.io/projected/1e0994f2-a882-4579-b76f-e953f6b75a25-kube-api-access-dpfs2\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.438715 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1e0994f2-a882-4579-b76f-e953f6b75a25-metallb-excludel2\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.438761 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-cert\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.438796 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.438828 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-metrics-certs\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.438877 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-metrics-certs\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.438916 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f76l6\" (UniqueName: \"kubernetes.io/projected/d3338d0d-1a9b-4176-84b4-a708ea1b574c-kube-api-access-f76l6\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.439396 4840 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.439480 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist podName:1e0994f2-a882-4579-b76f-e953f6b75a25 nodeName:}" failed. No retries permitted until 2026-01-29 12:19:34.939456564 +0000 UTC m=+906.602436477 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist") pod "speaker-g26s9" (UID: "1e0994f2-a882-4579-b76f-e953f6b75a25") : secret "metallb-memberlist" not found Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.439780 4840 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.439827 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/1e0994f2-a882-4579-b76f-e953f6b75a25-metallb-excludel2\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.439835 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-metrics-certs podName:1e0994f2-a882-4579-b76f-e953f6b75a25 nodeName:}" failed. No retries permitted until 2026-01-29 12:19:34.939820783 +0000 UTC m=+906.602800686 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-metrics-certs") pod "speaker-g26s9" (UID: "1e0994f2-a882-4579-b76f-e953f6b75a25") : secret "speaker-certs-secret" not found Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.439903 4840 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.439937 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-metrics-certs podName:d3338d0d-1a9b-4176-84b4-a708ea1b574c nodeName:}" failed. No retries permitted until 2026-01-29 12:19:34.939926586 +0000 UTC m=+906.602906589 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-metrics-certs") pod "controller-6968d8fdc4-wpf9f" (UID: "d3338d0d-1a9b-4176-84b4-a708ea1b574c") : secret "controller-certs-secret" not found Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.445314 4840 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.455657 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-cert\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.457009 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfs2\" (UniqueName: \"kubernetes.io/projected/1e0994f2-a882-4579-b76f-e953f6b75a25-kube-api-access-dpfs2\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 
12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.459608 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f76l6\" (UniqueName: \"kubernetes.io/projected/d3338d0d-1a9b-4176-84b4-a708ea1b574c-kube-api-access-f76l6\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.844540 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b91baf2c-fd04-4784-8d12-a5318088ee87-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k5x6q\" (UID: \"b91baf2c-fd04-4784-8d12-a5318088ee87\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.844988 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics-certs\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.851028 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b91baf2c-fd04-4784-8d12-a5318088ee87-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-k5x6q\" (UID: \"b91baf2c-fd04-4784-8d12-a5318088ee87\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.854069 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/96f69b16-80eb-4b5c-bf75-39c26aefd643-metrics-certs\") pod \"frr-k8s-jtlnm\" (UID: \"96f69b16-80eb-4b5c-bf75-39c26aefd643\") " pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.946325 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.946428 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-metrics-certs\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.946475 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-metrics-certs\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.946525 4840 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 29 12:19:34 crc kubenswrapper[4840]: E0129 12:19:34.946605 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist podName:1e0994f2-a882-4579-b76f-e953f6b75a25 nodeName:}" failed. No retries permitted until 2026-01-29 12:19:35.946584646 +0000 UTC m=+907.609564539 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist") pod "speaker-g26s9" (UID: "1e0994f2-a882-4579-b76f-e953f6b75a25") : secret "metallb-memberlist" not found Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.950581 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-metrics-certs\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:34 crc kubenswrapper[4840]: I0129 12:19:34.951741 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d3338d0d-1a9b-4176-84b4-a708ea1b574c-metrics-certs\") pod \"controller-6968d8fdc4-wpf9f\" (UID: \"d3338d0d-1a9b-4176-84b4-a708ea1b574c\") " pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.061094 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-jtlnm" Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.071736 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.177059 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.406105 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-wpf9f"] Jan 29 12:19:35 crc kubenswrapper[4840]: W0129 12:19:35.416220 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3338d0d_1a9b_4176_84b4_a708ea1b574c.slice/crio-f7c86a218f7a9949d24c07b88272a135cab5d32181f069af8c9c2d9f80c1f5bb WatchSource:0}: Error finding container f7c86a218f7a9949d24c07b88272a135cab5d32181f069af8c9c2d9f80c1f5bb: Status 404 returned error can't find the container with id f7c86a218f7a9949d24c07b88272a135cab5d32181f069af8c9c2d9f80c1f5bb Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.540000 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"] Jan 29 12:19:35 crc kubenswrapper[4840]: W0129 12:19:35.544762 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb91baf2c_fd04_4784_8d12_a5318088ee87.slice/crio-3f02bd46fafe6275b4b2185f2a0f751c7aea1b0610d6583a7cf9939f441b216c WatchSource:0}: Error finding container 3f02bd46fafe6275b4b2185f2a0f751c7aea1b0610d6583a7cf9939f441b216c: Status 404 returned error can't find the container with id 3f02bd46fafe6275b4b2185f2a0f751c7aea1b0610d6583a7cf9939f441b216c Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.959618 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.965671 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/1e0994f2-a882-4579-b76f-e953f6b75a25-memberlist\") pod \"speaker-g26s9\" (UID: \"1e0994f2-a882-4579-b76f-e953f6b75a25\") " pod="metallb-system/speaker-g26s9" Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.986225 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" event={"ID":"b91baf2c-fd04-4784-8d12-a5318088ee87","Type":"ContainerStarted","Data":"3f02bd46fafe6275b4b2185f2a0f751c7aea1b0610d6583a7cf9939f441b216c"} Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.987410 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerStarted","Data":"b3d90063b144c8d4e7907f874ffa329f52998b4e9a7562c2a7e808838b8490b7"} Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.989148 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wpf9f" event={"ID":"d3338d0d-1a9b-4176-84b4-a708ea1b574c","Type":"ContainerStarted","Data":"62927b715085bb0ebb44416f5eb3cefb19ed7aac8e63c6d33b3492502f139c5a"} Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.989181 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wpf9f" event={"ID":"d3338d0d-1a9b-4176-84b4-a708ea1b574c","Type":"ContainerStarted","Data":"a51b0ca67d6800d9e9916f16400af9548b57aa83c0e231a2084dfec5d164768a"} Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.989193 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-wpf9f" event={"ID":"d3338d0d-1a9b-4176-84b4-a708ea1b574c","Type":"ContainerStarted","Data":"f7c86a218f7a9949d24c07b88272a135cab5d32181f069af8c9c2d9f80c1f5bb"} Jan 29 12:19:35 crc kubenswrapper[4840]: I0129 12:19:35.990218 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-wpf9f" Jan 29 12:19:36 
crc kubenswrapper[4840]: I0129 12:19:36.020331 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-wpf9f" podStartSLOduration=2.020308997 podStartE2EDuration="2.020308997s" podCreationTimestamp="2026-01-29 12:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:19:36.015633273 +0000 UTC m=+907.678613166" watchObservedRunningTime="2026-01-29 12:19:36.020308997 +0000 UTC m=+907.683288890"
Jan 29 12:19:36 crc kubenswrapper[4840]: I0129 12:19:36.059284 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g26s9"
Jan 29 12:19:36 crc kubenswrapper[4840]: I0129 12:19:36.999545 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g26s9" event={"ID":"1e0994f2-a882-4579-b76f-e953f6b75a25","Type":"ContainerStarted","Data":"2ec914e0b0c74aec61a41e073e696898514829976ff01c707c0a96fcd07ca06a"}
Jan 29 12:19:36 crc kubenswrapper[4840]: I0129 12:19:36.999588 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g26s9" event={"ID":"1e0994f2-a882-4579-b76f-e953f6b75a25","Type":"ContainerStarted","Data":"68ecc3f0dcb98a782863e0c66fe467102f4416fbb932ca65aa8ccdf49d90d37e"}
Jan 29 12:19:36 crc kubenswrapper[4840]: I0129 12:19:36.999600 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g26s9" event={"ID":"1e0994f2-a882-4579-b76f-e953f6b75a25","Type":"ContainerStarted","Data":"fb0cb2d8186eee7d678fea085bb382ceee8bbc4837f50c17ec652244545e2320"}
Jan 29 12:19:37 crc kubenswrapper[4840]: I0129 12:19:36.999847 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g26s9"
Jan 29 12:19:37 crc kubenswrapper[4840]: I0129 12:19:37.026322 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g26s9" podStartSLOduration=3.026299292 podStartE2EDuration="3.026299292s" podCreationTimestamp="2026-01-29 12:19:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:19:37.024021542 +0000 UTC m=+908.687001435" watchObservedRunningTime="2026-01-29 12:19:37.026299292 +0000 UTC m=+908.689279175"
Jan 29 12:19:43 crc kubenswrapper[4840]: I0129 12:19:43.046439 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" event={"ID":"b91baf2c-fd04-4784-8d12-a5318088ee87","Type":"ContainerStarted","Data":"53991f6d27a72c1af13eaf27afb0c12a632b20ac1c4a3f2848c46cd431c15c17"}
Jan 29 12:19:43 crc kubenswrapper[4840]: I0129 12:19:43.047536 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"
Jan 29 12:19:43 crc kubenswrapper[4840]: I0129 12:19:43.050533 4840 generic.go:334] "Generic (PLEG): container finished" podID="96f69b16-80eb-4b5c-bf75-39c26aefd643" containerID="9858408a38e81a4d574ec9b08a4313a75c9c7e6f6405c0fd48dec90f648121ea" exitCode=0
Jan 29 12:19:43 crc kubenswrapper[4840]: I0129 12:19:43.050578 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerDied","Data":"9858408a38e81a4d574ec9b08a4313a75c9c7e6f6405c0fd48dec90f648121ea"}
Jan 29 12:19:43 crc kubenswrapper[4840]: I0129 12:19:43.078961 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q" podStartSLOduration=1.7756349729999998 podStartE2EDuration="9.078920496s" podCreationTimestamp="2026-01-29 12:19:34 +0000 UTC" firstStartedPulling="2026-01-29 12:19:35.546817642 +0000 UTC m=+907.209797525" lastFinishedPulling="2026-01-29 12:19:42.850103155 +0000 UTC m=+914.513083048" observedRunningTime="2026-01-29 12:19:43.073433489 +0000 UTC m=+914.736413382" watchObservedRunningTime="2026-01-29 12:19:43.078920496 +0000 UTC m=+914.741900389"
Jan 29 12:19:44 crc kubenswrapper[4840]: I0129 12:19:44.059069 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerDied","Data":"af095b4a353f6db43e083b7d03aac4695661b02df75ec0044789ff3b53cb347c"}
Jan 29 12:19:44 crc kubenswrapper[4840]: I0129 12:19:44.058928 4840 generic.go:334] "Generic (PLEG): container finished" podID="96f69b16-80eb-4b5c-bf75-39c26aefd643" containerID="af095b4a353f6db43e083b7d03aac4695661b02df75ec0044789ff3b53cb347c" exitCode=0
Jan 29 12:19:45 crc kubenswrapper[4840]: I0129 12:19:45.070010 4840 generic.go:334] "Generic (PLEG): container finished" podID="96f69b16-80eb-4b5c-bf75-39c26aefd643" containerID="a31fbd0d00f5bf8aa68e8b8b920e2328e407b4c2004cc91e6a091c22bf04590c" exitCode=0
Jan 29 12:19:45 crc kubenswrapper[4840]: I0129 12:19:45.070186 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerDied","Data":"a31fbd0d00f5bf8aa68e8b8b920e2328e407b4c2004cc91e6a091c22bf04590c"}
Jan 29 12:19:45 crc kubenswrapper[4840]: I0129 12:19:45.185133 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-wpf9f"
Jan 29 12:19:46 crc kubenswrapper[4840]: I0129 12:19:46.078642 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerStarted","Data":"64b270f973ad5b88f3b42ba3da656bcfdbb64d320e7bdb68c1167ef5060de4ce"}
Jan 29 12:19:46 crc kubenswrapper[4840]: I0129 12:19:46.078688 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerStarted","Data":"846d8df87f7b992641315b8bf773060a4a8c44de1f01eb83300ce2d777f62503"}
Jan 29 12:19:46 crc kubenswrapper[4840]: I0129 12:19:46.078698 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerStarted","Data":"574d20162ed4e2841d458262ab6a77695230b8631a66eeb546678d3b19bef589"}
Jan 29 12:19:46 crc kubenswrapper[4840]: I0129 12:19:46.270618 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g26s9"
Jan 29 12:19:47 crc kubenswrapper[4840]: I0129 12:19:47.090992 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerStarted","Data":"0a253a71f20b94ecc4385256f16c6fb4204861643f67702c2d0520a9ca3c3b84"}
Jan 29 12:19:47 crc kubenswrapper[4840]: I0129 12:19:47.091331 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:47 crc kubenswrapper[4840]: I0129 12:19:47.091356 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerStarted","Data":"2fd93e96e57888ac6ec82296dcd4487b7c80d53f121289cb6a72e575a022b491"}
Jan 29 12:19:47 crc kubenswrapper[4840]: I0129 12:19:47.091370 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-jtlnm" event={"ID":"96f69b16-80eb-4b5c-bf75-39c26aefd643","Type":"ContainerStarted","Data":"111ec797245f0d223ca80e993e2a89c7388f4685a843ed1ee23edaeebd441d11"}
Jan 29 12:19:47 crc kubenswrapper[4840]: I0129 12:19:47.118753 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-jtlnm" podStartSLOduration=5.488504898 podStartE2EDuration="13.118726258s" podCreationTimestamp="2026-01-29 12:19:34 +0000 UTC" firstStartedPulling="2026-01-29 12:19:35.205239304 +0000 UTC m=+906.868219197" lastFinishedPulling="2026-01-29 12:19:42.835460664 +0000 UTC m=+914.498440557" observedRunningTime="2026-01-29 12:19:47.114863595 +0000 UTC m=+918.777843488" watchObservedRunningTime="2026-01-29 12:19:47.118726258 +0000 UTC m=+918.781706151"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.252586 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-sqtl4"]
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.253961 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sqtl4"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.257338 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.257571 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-z2lm5"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.263335 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.272653 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sqtl4"]
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.353561 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmxfk\" (UniqueName: \"kubernetes.io/projected/4f7befda-9fe2-4853-91ba-abefd8310a62-kube-api-access-zmxfk\") pod \"openstack-operator-index-sqtl4\" (UID: \"4f7befda-9fe2-4853-91ba-abefd8310a62\") " pod="openstack-operators/openstack-operator-index-sqtl4"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.454516 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmxfk\" (UniqueName: \"kubernetes.io/projected/4f7befda-9fe2-4853-91ba-abefd8310a62-kube-api-access-zmxfk\") pod \"openstack-operator-index-sqtl4\" (UID: \"4f7befda-9fe2-4853-91ba-abefd8310a62\") " pod="openstack-operators/openstack-operator-index-sqtl4"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.474248 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmxfk\" (UniqueName: \"kubernetes.io/projected/4f7befda-9fe2-4853-91ba-abefd8310a62-kube-api-access-zmxfk\") pod \"openstack-operator-index-sqtl4\" (UID: \"4f7befda-9fe2-4853-91ba-abefd8310a62\") " pod="openstack-operators/openstack-operator-index-sqtl4"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.573791 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sqtl4"
Jan 29 12:19:49 crc kubenswrapper[4840]: I0129 12:19:49.868986 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-sqtl4"]
Jan 29 12:19:49 crc kubenswrapper[4840]: W0129 12:19:49.876824 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f7befda_9fe2_4853_91ba_abefd8310a62.slice/crio-888019cf5b102ee9289f49c0f1e027a185422473ed5e2163fc6c10451acdfef1 WatchSource:0}: Error finding container 888019cf5b102ee9289f49c0f1e027a185422473ed5e2163fc6c10451acdfef1: Status 404 returned error can't find the container with id 888019cf5b102ee9289f49c0f1e027a185422473ed5e2163fc6c10451acdfef1
Jan 29 12:19:50 crc kubenswrapper[4840]: I0129 12:19:50.062060 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:50 crc kubenswrapper[4840]: I0129 12:19:50.118059 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:50 crc kubenswrapper[4840]: I0129 12:19:50.123052 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sqtl4" event={"ID":"4f7befda-9fe2-4853-91ba-abefd8310a62","Type":"ContainerStarted","Data":"888019cf5b102ee9289f49c0f1e027a185422473ed5e2163fc6c10451acdfef1"}
Jan 29 12:19:52 crc kubenswrapper[4840]: I0129 12:19:52.825667 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sqtl4"]
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.147250 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sqtl4" event={"ID":"4f7befda-9fe2-4853-91ba-abefd8310a62","Type":"ContainerStarted","Data":"6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d"}
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.166140 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-sqtl4" podStartSLOduration=1.437220376 podStartE2EDuration="4.166118912s" podCreationTimestamp="2026-01-29 12:19:49 +0000 UTC" firstStartedPulling="2026-01-29 12:19:49.879392251 +0000 UTC m=+921.542372154" lastFinishedPulling="2026-01-29 12:19:52.608290797 +0000 UTC m=+924.271270690" observedRunningTime="2026-01-29 12:19:53.165469475 +0000 UTC m=+924.828449368" watchObservedRunningTime="2026-01-29 12:19:53.166118912 +0000 UTC m=+924.829098805"
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.430591 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-gg8jj"]
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.431925 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.440835 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gg8jj"]
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.522613 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.522668 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.523585 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt4mk\" (UniqueName: \"kubernetes.io/projected/fffc9f72-ccc3-41d2-b10d-60465c7b85e7-kube-api-access-rt4mk\") pod \"openstack-operator-index-gg8jj\" (UID: \"fffc9f72-ccc3-41d2-b10d-60465c7b85e7\") " pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.625234 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt4mk\" (UniqueName: \"kubernetes.io/projected/fffc9f72-ccc3-41d2-b10d-60465c7b85e7-kube-api-access-rt4mk\") pod \"openstack-operator-index-gg8jj\" (UID: \"fffc9f72-ccc3-41d2-b10d-60465c7b85e7\") " pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.646320 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt4mk\" (UniqueName: \"kubernetes.io/projected/fffc9f72-ccc3-41d2-b10d-60465c7b85e7-kube-api-access-rt4mk\") pod \"openstack-operator-index-gg8jj\" (UID: \"fffc9f72-ccc3-41d2-b10d-60465c7b85e7\") " pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.754009 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:19:53 crc kubenswrapper[4840]: I0129 12:19:53.940991 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-gg8jj"]
Jan 29 12:19:53 crc kubenswrapper[4840]: W0129 12:19:53.943438 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfffc9f72_ccc3_41d2_b10d_60465c7b85e7.slice/crio-4f5f589b8cf5c928a3405bb5d8dceccac5fff099bbcc0743297bc792f5776e3a WatchSource:0}: Error finding container 4f5f589b8cf5c928a3405bb5d8dceccac5fff099bbcc0743297bc792f5776e3a: Status 404 returned error can't find the container with id 4f5f589b8cf5c928a3405bb5d8dceccac5fff099bbcc0743297bc792f5776e3a
Jan 29 12:19:54 crc kubenswrapper[4840]: I0129 12:19:54.156420 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-sqtl4" podUID="4f7befda-9fe2-4853-91ba-abefd8310a62" containerName="registry-server" containerID="cri-o://6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d" gracePeriod=2
Jan 29 12:19:54 crc kubenswrapper[4840]: I0129 12:19:54.157457 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gg8jj" event={"ID":"fffc9f72-ccc3-41d2-b10d-60465c7b85e7","Type":"ContainerStarted","Data":"4f5f589b8cf5c928a3405bb5d8dceccac5fff099bbcc0743297bc792f5776e3a"}
Jan 29 12:19:54 crc kubenswrapper[4840]: I0129 12:19:54.469015 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sqtl4"
Jan 29 12:19:54 crc kubenswrapper[4840]: I0129 12:19:54.539369 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmxfk\" (UniqueName: \"kubernetes.io/projected/4f7befda-9fe2-4853-91ba-abefd8310a62-kube-api-access-zmxfk\") pod \"4f7befda-9fe2-4853-91ba-abefd8310a62\" (UID: \"4f7befda-9fe2-4853-91ba-abefd8310a62\") "
Jan 29 12:19:54 crc kubenswrapper[4840]: I0129 12:19:54.545110 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f7befda-9fe2-4853-91ba-abefd8310a62-kube-api-access-zmxfk" (OuterVolumeSpecName: "kube-api-access-zmxfk") pod "4f7befda-9fe2-4853-91ba-abefd8310a62" (UID: "4f7befda-9fe2-4853-91ba-abefd8310a62"). InnerVolumeSpecName "kube-api-access-zmxfk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:19:54 crc kubenswrapper[4840]: I0129 12:19:54.641022 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmxfk\" (UniqueName: \"kubernetes.io/projected/4f7befda-9fe2-4853-91ba-abefd8310a62-kube-api-access-zmxfk\") on node \"crc\" DevicePath \"\""
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.065253 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-jtlnm"
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.078527 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-k5x6q"
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.163115 4840 generic.go:334] "Generic (PLEG): container finished" podID="4f7befda-9fe2-4853-91ba-abefd8310a62" containerID="6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d" exitCode=0
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.163187 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sqtl4" event={"ID":"4f7befda-9fe2-4853-91ba-abefd8310a62","Type":"ContainerDied","Data":"6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d"}
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.163196 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-sqtl4"
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.163239 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-sqtl4" event={"ID":"4f7befda-9fe2-4853-91ba-abefd8310a62","Type":"ContainerDied","Data":"888019cf5b102ee9289f49c0f1e027a185422473ed5e2163fc6c10451acdfef1"}
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.163261 4840 scope.go:117] "RemoveContainer" containerID="6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d"
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.166283 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-gg8jj" event={"ID":"fffc9f72-ccc3-41d2-b10d-60465c7b85e7","Type":"ContainerStarted","Data":"f116857fe3b5c375c39a3d2dd67de8e7a12f4e1aea6448d8932abab27b705352"}
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.183657 4840 scope.go:117] "RemoveContainer" containerID="6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d"
Jan 29 12:19:55 crc kubenswrapper[4840]: E0129 12:19:55.184428 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d\": container with ID starting with 6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d not found: ID does not exist" containerID="6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d"
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.184482 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d"} err="failed to get container status \"6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d\": rpc error: code = NotFound desc = could not find container \"6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d\": container with ID starting with 6ee091ebc37342f85cae5ac3b5e1d0f69d641760d7f4629b3e4954097f4c809d not found: ID does not exist"
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.186268 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-sqtl4"]
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.190562 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-sqtl4"]
Jan 29 12:19:55 crc kubenswrapper[4840]: I0129 12:19:55.200535 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-gg8jj" podStartSLOduration=2.149893489 podStartE2EDuration="2.20051509s" podCreationTimestamp="2026-01-29 12:19:53 +0000 UTC" firstStartedPulling="2026-01-29 12:19:53.947155158 +0000 UTC m=+925.610135051" lastFinishedPulling="2026-01-29 12:19:53.997776759 +0000 UTC m=+925.660756652" observedRunningTime="2026-01-29 12:19:55.194352835 +0000 UTC m=+926.857332748" watchObservedRunningTime="2026-01-29 12:19:55.20051509 +0000 UTC m=+926.863494983"
Jan 29 12:19:57 crc kubenswrapper[4840]: I0129 12:19:57.009659 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f7befda-9fe2-4853-91ba-abefd8310a62" path="/var/lib/kubelet/pods/4f7befda-9fe2-4853-91ba-abefd8310a62/volumes"
Jan 29 12:20:03 crc kubenswrapper[4840]: I0129 12:20:03.754474 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:20:03 crc kubenswrapper[4840]: I0129 12:20:03.756885 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:20:03 crc kubenswrapper[4840]: I0129 12:20:03.797967 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:20:04 crc kubenswrapper[4840]: I0129 12:20:04.247640 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-gg8jj"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.306180 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"]
Jan 29 12:20:10 crc kubenswrapper[4840]: E0129 12:20:10.307045 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f7befda-9fe2-4853-91ba-abefd8310a62" containerName="registry-server"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.307070 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f7befda-9fe2-4853-91ba-abefd8310a62" containerName="registry-server"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.307247 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f7befda-9fe2-4853-91ba-abefd8310a62" containerName="registry-server"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.308473 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.312402 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-t6jd2"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.323470 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"]
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.364799 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4b8f\" (UniqueName: \"kubernetes.io/projected/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-kube-api-access-g4b8f\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.364931 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-util\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.364972 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-bundle\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.466329 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-util\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.466389 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-bundle\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.466430 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4b8f\" (UniqueName: \"kubernetes.io/projected/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-kube-api-access-g4b8f\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.467625 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-util\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.467893 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-bundle\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.489330 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4b8f\" (UniqueName: \"kubernetes.io/projected/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-kube-api-access-g4b8f\") pod \"c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") " pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.640242 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:10 crc kubenswrapper[4840]: I0129 12:20:10.963682 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"]
Jan 29 12:20:11 crc kubenswrapper[4840]: I0129 12:20:11.275241 4840 generic.go:334] "Generic (PLEG): container finished" podID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerID="bd7e1eda47b18c25597ab42357cd962412c8278a503fcdf398f2cc33688b5fa4" exitCode=0
Jan 29 12:20:11 crc kubenswrapper[4840]: I0129 12:20:11.275306 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql" event={"ID":"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f","Type":"ContainerDied","Data":"bd7e1eda47b18c25597ab42357cd962412c8278a503fcdf398f2cc33688b5fa4"}
Jan 29 12:20:11 crc kubenswrapper[4840]: I0129 12:20:11.275347 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql" event={"ID":"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f","Type":"ContainerStarted","Data":"f858761b2ac753ef09ffbcf96f0ce4f309b435bc87dfbf74fb4b2a0463ca399d"}
Jan 29 12:20:13 crc kubenswrapper[4840]: I0129 12:20:13.291452 4840 generic.go:334] "Generic (PLEG): container finished" podID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerID="9619b9f8a03b1a0c2f85b2f38527fa7d2821adbea76153e9ca5f9ff1a374c846" exitCode=0
Jan 29 12:20:13 crc kubenswrapper[4840]: I0129 12:20:13.291552 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql" event={"ID":"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f","Type":"ContainerDied","Data":"9619b9f8a03b1a0c2f85b2f38527fa7d2821adbea76153e9ca5f9ff1a374c846"}
Jan 29 12:20:14 crc kubenswrapper[4840]: I0129 12:20:14.299319 4840 generic.go:334] "Generic (PLEG): container finished" podID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerID="fecc673c3b19d5cd1f173765aa39dd5bd8a6702af15f209265820a8ef1deb8ab" exitCode=0
Jan 29 12:20:14 crc kubenswrapper[4840]: I0129 12:20:14.299405 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql" event={"ID":"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f","Type":"ContainerDied","Data":"fecc673c3b19d5cd1f173765aa39dd5bd8a6702af15f209265820a8ef1deb8ab"}
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.552667 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.647525 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-util\") pod \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") "
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.647930 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-bundle\") pod \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") "
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.647990 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4b8f\" (UniqueName: \"kubernetes.io/projected/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-kube-api-access-g4b8f\") pod \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\" (UID: \"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f\") "
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.648443 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-bundle" (OuterVolumeSpecName: "bundle") pod "1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" (UID: "1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.653517 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-kube-api-access-g4b8f" (OuterVolumeSpecName: "kube-api-access-g4b8f") pod "1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" (UID: "1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f"). InnerVolumeSpecName "kube-api-access-g4b8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.669258 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-util" (OuterVolumeSpecName: "util") pod "1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" (UID: "1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.749556 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4b8f\" (UniqueName: \"kubernetes.io/projected/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-kube-api-access-g4b8f\") on node \"crc\" DevicePath \"\""
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.749589 4840 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-util\") on node \"crc\" DevicePath \"\""
Jan 29 12:20:15 crc kubenswrapper[4840]: I0129 12:20:15.749599 4840 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f-bundle\") on node \"crc\" DevicePath \"\""
Jan 29 12:20:16 crc kubenswrapper[4840]: I0129 12:20:16.314829 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql" event={"ID":"1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f","Type":"ContainerDied","Data":"f858761b2ac753ef09ffbcf96f0ce4f309b435bc87dfbf74fb4b2a0463ca399d"}
Jan 29 12:20:16 crc kubenswrapper[4840]: I0129 12:20:16.314879 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f858761b2ac753ef09ffbcf96f0ce4f309b435bc87dfbf74fb4b2a0463ca399d"
Jan 29 12:20:16 crc kubenswrapper[4840]: I0129 12:20:16.314981 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql"
Jan 29 12:20:23 crc kubenswrapper[4840]: I0129 12:20:23.522043 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:20:23 crc kubenswrapper[4840]: I0129 12:20:23.522556 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.506961 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv"]
Jan 29 12:20:24 crc kubenswrapper[4840]: E0129 12:20:24.507563 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerName="util"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.507580 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerName="util"
Jan 29 12:20:24 crc kubenswrapper[4840]: E0129 12:20:24.507597 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerName="extract"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.507606 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerName="extract"
Jan 29 12:20:24 crc kubenswrapper[4840]: E0129 12:20:24.507621 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerName="pull"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.507627 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerName="pull"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.507765 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f" containerName="extract"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.508328 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.510322 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-mrm2l"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.547393 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv"]
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.568099 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrtpr\" (UniqueName: \"kubernetes.io/projected/a1c64bd2-0b59-4e9e-aab6-16836c7b117e-kube-api-access-vrtpr\") pod \"openstack-operator-controller-init-59c8666fb5-rksqv\" (UID: \"a1c64bd2-0b59-4e9e-aab6-16836c7b117e\") " pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.669472 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrtpr\" (UniqueName: \"kubernetes.io/projected/a1c64bd2-0b59-4e9e-aab6-16836c7b117e-kube-api-access-vrtpr\") pod \"openstack-operator-controller-init-59c8666fb5-rksqv\" (UID: \"a1c64bd2-0b59-4e9e-aab6-16836c7b117e\") " pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv"
Jan 29 12:20:24 crc kubenswrapper[4840]: I0129
12:20:24.690429 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrtpr\" (UniqueName: \"kubernetes.io/projected/a1c64bd2-0b59-4e9e-aab6-16836c7b117e-kube-api-access-vrtpr\") pod \"openstack-operator-controller-init-59c8666fb5-rksqv\" (UID: \"a1c64bd2-0b59-4e9e-aab6-16836c7b117e\") " pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv" Jan 29 12:20:24 crc kubenswrapper[4840]: I0129 12:20:24.825574 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv" Jan 29 12:20:25 crc kubenswrapper[4840]: I0129 12:20:25.084909 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv"] Jan 29 12:20:25 crc kubenswrapper[4840]: I0129 12:20:25.370562 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv" event={"ID":"a1c64bd2-0b59-4e9e-aab6-16836c7b117e","Type":"ContainerStarted","Data":"7c7ebbfc065bece6bd4179671a09a8bd2a05208a510cde3339e11700d0952183"} Jan 29 12:20:29 crc kubenswrapper[4840]: I0129 12:20:29.394833 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv" event={"ID":"a1c64bd2-0b59-4e9e-aab6-16836c7b117e","Type":"ContainerStarted","Data":"b903bffac1ab738f098946f47bc9a9b12da40ff323330452657e884d2d5a5173"} Jan 29 12:20:29 crc kubenswrapper[4840]: I0129 12:20:29.395424 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv" Jan 29 12:20:29 crc kubenswrapper[4840]: I0129 12:20:29.429654 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv" podStartSLOduration=1.7244102209999999 
podStartE2EDuration="5.42961799s" podCreationTimestamp="2026-01-29 12:20:24 +0000 UTC" firstStartedPulling="2026-01-29 12:20:25.098451929 +0000 UTC m=+956.761431822" lastFinishedPulling="2026-01-29 12:20:28.803659698 +0000 UTC m=+960.466639591" observedRunningTime="2026-01-29 12:20:29.425012517 +0000 UTC m=+961.087992410" watchObservedRunningTime="2026-01-29 12:20:29.42961799 +0000 UTC m=+961.092597883" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.412394 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7vfjj"] Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.413868 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.423330 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vfjj"] Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.496806 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-catalog-content\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.496992 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8sqh\" (UniqueName: \"kubernetes.io/projected/6e289ed5-33ce-49e5-a651-b4897f3e639a-kube-api-access-z8sqh\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.497036 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-utilities\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.598679 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-utilities\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.598768 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-catalog-content\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.598818 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8sqh\" (UniqueName: \"kubernetes.io/projected/6e289ed5-33ce-49e5-a651-b4897f3e639a-kube-api-access-z8sqh\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.599336 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-utilities\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.599349 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-catalog-content\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.625966 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8sqh\" (UniqueName: \"kubernetes.io/projected/6e289ed5-33ce-49e5-a651-b4897f3e639a-kube-api-access-z8sqh\") pod \"community-operators-7vfjj\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:32 crc kubenswrapper[4840]: I0129 12:20:32.734019 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:33 crc kubenswrapper[4840]: I0129 12:20:33.039045 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7vfjj"] Jan 29 12:20:33 crc kubenswrapper[4840]: W0129 12:20:33.042873 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e289ed5_33ce_49e5_a651_b4897f3e639a.slice/crio-6597719d997b2e4c6351d3c42dd0340713dfc0348a592791e8ca50519b8a4e27 WatchSource:0}: Error finding container 6597719d997b2e4c6351d3c42dd0340713dfc0348a592791e8ca50519b8a4e27: Status 404 returned error can't find the container with id 6597719d997b2e4c6351d3c42dd0340713dfc0348a592791e8ca50519b8a4e27 Jan 29 12:20:33 crc kubenswrapper[4840]: I0129 12:20:33.418940 4840 generic.go:334] "Generic (PLEG): container finished" podID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerID="d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c" exitCode=0 Jan 29 12:20:33 crc kubenswrapper[4840]: I0129 12:20:33.419022 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vfjj" 
event={"ID":"6e289ed5-33ce-49e5-a651-b4897f3e639a","Type":"ContainerDied","Data":"d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c"} Jan 29 12:20:33 crc kubenswrapper[4840]: I0129 12:20:33.419538 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vfjj" event={"ID":"6e289ed5-33ce-49e5-a651-b4897f3e639a","Type":"ContainerStarted","Data":"6597719d997b2e4c6351d3c42dd0340713dfc0348a592791e8ca50519b8a4e27"} Jan 29 12:20:34 crc kubenswrapper[4840]: I0129 12:20:34.828261 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-59c8666fb5-rksqv" Jan 29 12:20:35 crc kubenswrapper[4840]: I0129 12:20:35.433460 4840 generic.go:334] "Generic (PLEG): container finished" podID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerID="54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33" exitCode=0 Jan 29 12:20:35 crc kubenswrapper[4840]: I0129 12:20:35.433507 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vfjj" event={"ID":"6e289ed5-33ce-49e5-a651-b4897f3e639a","Type":"ContainerDied","Data":"54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33"} Jan 29 12:20:36 crc kubenswrapper[4840]: I0129 12:20:36.448813 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vfjj" event={"ID":"6e289ed5-33ce-49e5-a651-b4897f3e639a","Type":"ContainerStarted","Data":"2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957"} Jan 29 12:20:36 crc kubenswrapper[4840]: I0129 12:20:36.473686 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7vfjj" podStartSLOduration=2.061838797 podStartE2EDuration="4.4736699s" podCreationTimestamp="2026-01-29 12:20:32 +0000 UTC" firstStartedPulling="2026-01-29 12:20:33.422729186 +0000 UTC m=+965.085709079" 
lastFinishedPulling="2026-01-29 12:20:35.834560289 +0000 UTC m=+967.497540182" observedRunningTime="2026-01-29 12:20:36.471219986 +0000 UTC m=+968.134199879" watchObservedRunningTime="2026-01-29 12:20:36.4736699 +0000 UTC m=+968.136649793" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.481635 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pzsdb"] Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.483089 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.491640 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzsdb"] Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.597542 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-utilities\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.597697 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sf2z\" (UniqueName: \"kubernetes.io/projected/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-kube-api-access-8sf2z\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.597770 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-catalog-content\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " 
pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.699367 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sf2z\" (UniqueName: \"kubernetes.io/projected/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-kube-api-access-8sf2z\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.699449 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-catalog-content\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.699528 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-utilities\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.700265 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-catalog-content\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.700418 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-utilities\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " 
pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.721044 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sf2z\" (UniqueName: \"kubernetes.io/projected/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-kube-api-access-8sf2z\") pod \"certified-operators-pzsdb\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:38 crc kubenswrapper[4840]: I0129 12:20:38.800178 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:39 crc kubenswrapper[4840]: I0129 12:20:39.140254 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pzsdb"] Jan 29 12:20:39 crc kubenswrapper[4840]: W0129 12:20:39.154470 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf08d3f4_a913_4be2_9b83_4e3ab6ddcb16.slice/crio-e742ec52d8ee6991e921d1f540bf136127507ee60e502d22d1912dc64861ee10 WatchSource:0}: Error finding container e742ec52d8ee6991e921d1f540bf136127507ee60e502d22d1912dc64861ee10: Status 404 returned error can't find the container with id e742ec52d8ee6991e921d1f540bf136127507ee60e502d22d1912dc64861ee10 Jan 29 12:20:39 crc kubenswrapper[4840]: I0129 12:20:39.466009 4840 generic.go:334] "Generic (PLEG): container finished" podID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerID="56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee" exitCode=0 Jan 29 12:20:39 crc kubenswrapper[4840]: I0129 12:20:39.466056 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzsdb" event={"ID":"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16","Type":"ContainerDied","Data":"56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee"} Jan 29 12:20:39 crc kubenswrapper[4840]: I0129 12:20:39.466082 
4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzsdb" event={"ID":"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16","Type":"ContainerStarted","Data":"e742ec52d8ee6991e921d1f540bf136127507ee60e502d22d1912dc64861ee10"} Jan 29 12:20:41 crc kubenswrapper[4840]: I0129 12:20:41.478863 4840 generic.go:334] "Generic (PLEG): container finished" podID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerID="20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411" exitCode=0 Jan 29 12:20:41 crc kubenswrapper[4840]: I0129 12:20:41.478939 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzsdb" event={"ID":"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16","Type":"ContainerDied","Data":"20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411"} Jan 29 12:20:42 crc kubenswrapper[4840]: I0129 12:20:42.488304 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzsdb" event={"ID":"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16","Type":"ContainerStarted","Data":"064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c"} Jan 29 12:20:42 crc kubenswrapper[4840]: I0129 12:20:42.513421 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pzsdb" podStartSLOduration=2.071368671 podStartE2EDuration="4.513400109s" podCreationTimestamp="2026-01-29 12:20:38 +0000 UTC" firstStartedPulling="2026-01-29 12:20:39.467789099 +0000 UTC m=+971.130768992" lastFinishedPulling="2026-01-29 12:20:41.909820537 +0000 UTC m=+973.572800430" observedRunningTime="2026-01-29 12:20:42.510153802 +0000 UTC m=+974.173133715" watchObservedRunningTime="2026-01-29 12:20:42.513400109 +0000 UTC m=+974.176380002" Jan 29 12:20:42 crc kubenswrapper[4840]: I0129 12:20:42.734665 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 
12:20:42 crc kubenswrapper[4840]: I0129 12:20:42.734705 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:42 crc kubenswrapper[4840]: I0129 12:20:42.777613 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:43 crc kubenswrapper[4840]: I0129 12:20:43.543554 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:44 crc kubenswrapper[4840]: I0129 12:20:44.674181 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7vfjj"] Jan 29 12:20:45 crc kubenswrapper[4840]: I0129 12:20:45.505753 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7vfjj" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="registry-server" containerID="cri-o://2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957" gracePeriod=2 Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.451919 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.507674 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8sqh\" (UniqueName: \"kubernetes.io/projected/6e289ed5-33ce-49e5-a651-b4897f3e639a-kube-api-access-z8sqh\") pod \"6e289ed5-33ce-49e5-a651-b4897f3e639a\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.507743 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-utilities\") pod \"6e289ed5-33ce-49e5-a651-b4897f3e639a\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.507796 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-catalog-content\") pod \"6e289ed5-33ce-49e5-a651-b4897f3e639a\" (UID: \"6e289ed5-33ce-49e5-a651-b4897f3e639a\") " Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.509182 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-utilities" (OuterVolumeSpecName: "utilities") pod "6e289ed5-33ce-49e5-a651-b4897f3e639a" (UID: "6e289ed5-33ce-49e5-a651-b4897f3e639a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.514227 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e289ed5-33ce-49e5-a651-b4897f3e639a-kube-api-access-z8sqh" (OuterVolumeSpecName: "kube-api-access-z8sqh") pod "6e289ed5-33ce-49e5-a651-b4897f3e639a" (UID: "6e289ed5-33ce-49e5-a651-b4897f3e639a"). InnerVolumeSpecName "kube-api-access-z8sqh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.516717 4840 generic.go:334] "Generic (PLEG): container finished" podID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerID="2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957" exitCode=0 Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.516768 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vfjj" event={"ID":"6e289ed5-33ce-49e5-a651-b4897f3e639a","Type":"ContainerDied","Data":"2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957"} Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.516798 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7vfjj" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.516813 4840 scope.go:117] "RemoveContainer" containerID="2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.516801 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7vfjj" event={"ID":"6e289ed5-33ce-49e5-a651-b4897f3e639a","Type":"ContainerDied","Data":"6597719d997b2e4c6351d3c42dd0340713dfc0348a592791e8ca50519b8a4e27"} Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.545909 4840 scope.go:117] "RemoveContainer" containerID="54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.567183 4840 scope.go:117] "RemoveContainer" containerID="d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.585298 4840 scope.go:117] "RemoveContainer" containerID="2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957" Jan 29 12:20:46 crc kubenswrapper[4840]: E0129 12:20:46.585763 4840 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957\": container with ID starting with 2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957 not found: ID does not exist" containerID="2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.585791 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957"} err="failed to get container status \"2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957\": rpc error: code = NotFound desc = could not find container \"2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957\": container with ID starting with 2c07fb32522bd321a735ff3b9717e5f4c05fb56604dc0e222860e33e85b29957 not found: ID does not exist" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.585815 4840 scope.go:117] "RemoveContainer" containerID="54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33" Jan 29 12:20:46 crc kubenswrapper[4840]: E0129 12:20:46.586273 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33\": container with ID starting with 54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33 not found: ID does not exist" containerID="54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.586348 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33"} err="failed to get container status \"54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33\": rpc error: code = NotFound desc = could not find container 
\"54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33\": container with ID starting with 54f0b09e675d311c095bb3ff514c363265ff2afaecb47670f5596695ca063c33 not found: ID does not exist" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.586390 4840 scope.go:117] "RemoveContainer" containerID="d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c" Jan 29 12:20:46 crc kubenswrapper[4840]: E0129 12:20:46.586779 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c\": container with ID starting with d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c not found: ID does not exist" containerID="d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.586803 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c"} err="failed to get container status \"d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c\": rpc error: code = NotFound desc = could not find container \"d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c\": container with ID starting with d74573eb64fa683d0af9042ebdec6ec244ee695fca4e417a525ddc7e4705673c not found: ID does not exist" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.610105 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8sqh\" (UniqueName: \"kubernetes.io/projected/6e289ed5-33ce-49e5-a651-b4897f3e639a-kube-api-access-z8sqh\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:46 crc kubenswrapper[4840]: I0129 12:20:46.610137 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:48 crc 
kubenswrapper[4840]: I0129 12:20:48.292231 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e289ed5-33ce-49e5-a651-b4897f3e639a" (UID: "6e289ed5-33ce-49e5-a651-b4897f3e639a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:48 crc kubenswrapper[4840]: I0129 12:20:48.331358 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e289ed5-33ce-49e5-a651-b4897f3e639a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:48 crc kubenswrapper[4840]: I0129 12:20:48.345919 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7vfjj"] Jan 29 12:20:48 crc kubenswrapper[4840]: I0129 12:20:48.349830 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7vfjj"] Jan 29 12:20:48 crc kubenswrapper[4840]: I0129 12:20:48.801375 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:48 crc kubenswrapper[4840]: I0129 12:20:48.801421 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:48 crc kubenswrapper[4840]: I0129 12:20:48.844164 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:49 crc kubenswrapper[4840]: I0129 12:20:49.013599 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" path="/var/lib/kubelet/pods/6e289ed5-33ce-49e5-a651-b4897f3e639a/volumes" Jan 29 12:20:49 crc kubenswrapper[4840]: I0129 12:20:49.583015 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.683701 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xrff6"] Jan 29 12:20:50 crc kubenswrapper[4840]: E0129 12:20:50.683974 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="registry-server" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.683985 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="registry-server" Jan 29 12:20:50 crc kubenswrapper[4840]: E0129 12:20:50.683998 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="extract-utilities" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.684006 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="extract-utilities" Jan 29 12:20:50 crc kubenswrapper[4840]: E0129 12:20:50.684014 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="extract-content" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.684020 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="extract-content" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.684129 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e289ed5-33ce-49e5-a651-b4897f3e639a" containerName="registry-server" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.685332 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.699598 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrff6"] Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.768716 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmkdl\" (UniqueName: \"kubernetes.io/projected/2733e729-ef23-4660-94da-543b7351c382-kube-api-access-mmkdl\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.769278 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-catalog-content\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.769327 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-utilities\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.870754 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-catalog-content\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.870850 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-utilities\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.870969 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmkdl\" (UniqueName: \"kubernetes.io/projected/2733e729-ef23-4660-94da-543b7351c382-kube-api-access-mmkdl\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.871854 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-catalog-content\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.871928 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-utilities\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:50 crc kubenswrapper[4840]: I0129 12:20:50.895913 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmkdl\" (UniqueName: \"kubernetes.io/projected/2733e729-ef23-4660-94da-543b7351c382-kube-api-access-mmkdl\") pod \"redhat-marketplace-xrff6\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:51 crc kubenswrapper[4840]: I0129 12:20:51.008374 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:20:51 crc kubenswrapper[4840]: I0129 12:20:51.484998 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrff6"] Jan 29 12:20:51 crc kubenswrapper[4840]: I0129 12:20:51.558902 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrff6" event={"ID":"2733e729-ef23-4660-94da-543b7351c382","Type":"ContainerStarted","Data":"a3e8d49f0a009826950520a3e72a55177ea4b2e33e12d456788ed377f8574c2a"} Jan 29 12:20:51 crc kubenswrapper[4840]: I0129 12:20:51.674312 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzsdb"] Jan 29 12:20:51 crc kubenswrapper[4840]: I0129 12:20:51.674535 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pzsdb" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="registry-server" containerID="cri-o://064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c" gracePeriod=2 Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.075891 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.191509 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-catalog-content\") pod \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.191620 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sf2z\" (UniqueName: \"kubernetes.io/projected/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-kube-api-access-8sf2z\") pod \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.191869 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-utilities\") pod \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\" (UID: \"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16\") " Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.193043 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-utilities" (OuterVolumeSpecName: "utilities") pod "bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" (UID: "bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.194178 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.200424 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-kube-api-access-8sf2z" (OuterVolumeSpecName: "kube-api-access-8sf2z") pod "bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" (UID: "bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16"). InnerVolumeSpecName "kube-api-access-8sf2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.245275 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" (UID: "bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.295172 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.295548 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sf2z\" (UniqueName: \"kubernetes.io/projected/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16-kube-api-access-8sf2z\") on node \"crc\" DevicePath \"\"" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.574126 4840 generic.go:334] "Generic (PLEG): container finished" podID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerID="064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c" exitCode=0 Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.574147 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzsdb" event={"ID":"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16","Type":"ContainerDied","Data":"064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c"} Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.574255 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pzsdb" event={"ID":"bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16","Type":"ContainerDied","Data":"e742ec52d8ee6991e921d1f540bf136127507ee60e502d22d1912dc64861ee10"} Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.574288 4840 scope.go:117] "RemoveContainer" containerID="064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.575338 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pzsdb" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.578609 4840 generic.go:334] "Generic (PLEG): container finished" podID="2733e729-ef23-4660-94da-543b7351c382" containerID="217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975" exitCode=0 Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.578675 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrff6" event={"ID":"2733e729-ef23-4660-94da-543b7351c382","Type":"ContainerDied","Data":"217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975"} Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.620876 4840 scope.go:117] "RemoveContainer" containerID="20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.656814 4840 scope.go:117] "RemoveContainer" containerID="56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.660994 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pzsdb"] Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.669623 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pzsdb"] Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.674378 4840 scope.go:117] "RemoveContainer" containerID="064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c" Jan 29 12:20:52 crc kubenswrapper[4840]: E0129 12:20:52.674772 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c\": container with ID starting with 064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c not found: ID does not exist" 
containerID="064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.674811 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c"} err="failed to get container status \"064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c\": rpc error: code = NotFound desc = could not find container \"064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c\": container with ID starting with 064b122b719e2c4e0836a505f0195f86d385d008ef6cd3332c599f3a6174506c not found: ID does not exist" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.674837 4840 scope.go:117] "RemoveContainer" containerID="20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411" Jan 29 12:20:52 crc kubenswrapper[4840]: E0129 12:20:52.675459 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411\": container with ID starting with 20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411 not found: ID does not exist" containerID="20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.675583 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411"} err="failed to get container status \"20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411\": rpc error: code = NotFound desc = could not find container \"20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411\": container with ID starting with 20f063c93e91e7d4426260e7f382b17a729acea58664ee00227fecdcff0b0411 not found: ID does not exist" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.675693 4840 scope.go:117] 
"RemoveContainer" containerID="56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee" Jan 29 12:20:52 crc kubenswrapper[4840]: E0129 12:20:52.676126 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee\": container with ID starting with 56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee not found: ID does not exist" containerID="56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee" Jan 29 12:20:52 crc kubenswrapper[4840]: I0129 12:20:52.676178 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee"} err="failed to get container status \"56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee\": rpc error: code = NotFound desc = could not find container \"56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee\": container with ID starting with 56d114194001f22cc8b3753603ef5ba286895ce64f119fb6ec803b4d6612cdee not found: ID does not exist" Jan 29 12:20:53 crc kubenswrapper[4840]: I0129 12:20:53.010160 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" path="/var/lib/kubelet/pods/bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16/volumes" Jan 29 12:20:53 crc kubenswrapper[4840]: I0129 12:20:53.523069 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:20:53 crc kubenswrapper[4840]: I0129 12:20:53.523173 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:20:53 crc kubenswrapper[4840]: I0129 12:20:53.523260 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:20:53 crc kubenswrapper[4840]: I0129 12:20:53.524154 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7baefb030f501ff291c20d387cd99f80bc3c294136bd6bcef30704b577ae25fb"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:20:53 crc kubenswrapper[4840]: I0129 12:20:53.524258 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://7baefb030f501ff291c20d387cd99f80bc3c294136bd6bcef30704b577ae25fb" gracePeriod=600 Jan 29 12:20:54 crc kubenswrapper[4840]: I0129 12:20:54.594838 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="7baefb030f501ff291c20d387cd99f80bc3c294136bd6bcef30704b577ae25fb" exitCode=0 Jan 29 12:20:54 crc kubenswrapper[4840]: I0129 12:20:54.594916 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"7baefb030f501ff291c20d387cd99f80bc3c294136bd6bcef30704b577ae25fb"} Jan 29 12:20:54 crc kubenswrapper[4840]: I0129 12:20:54.595289 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" 
event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"a77c6788ba01603f527778b3118f3ab273c4bc5625cfa6fbdc57f992f241d646"} Jan 29 12:20:54 crc kubenswrapper[4840]: I0129 12:20:54.595310 4840 scope.go:117] "RemoveContainer" containerID="5eea126e892ef85191aa623b40e4a1a85f8aad602ec61c374067a4fa5bc21cd1" Jan 29 12:20:54 crc kubenswrapper[4840]: I0129 12:20:54.596771 4840 generic.go:334] "Generic (PLEG): container finished" podID="2733e729-ef23-4660-94da-543b7351c382" containerID="119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f" exitCode=0 Jan 29 12:20:54 crc kubenswrapper[4840]: I0129 12:20:54.596805 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrff6" event={"ID":"2733e729-ef23-4660-94da-543b7351c382","Type":"ContainerDied","Data":"119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f"} Jan 29 12:20:55 crc kubenswrapper[4840]: I0129 12:20:55.614281 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrff6" event={"ID":"2733e729-ef23-4660-94da-543b7351c382","Type":"ContainerStarted","Data":"0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02"} Jan 29 12:20:55 crc kubenswrapper[4840]: I0129 12:20:55.639248 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xrff6" podStartSLOduration=2.917948795 podStartE2EDuration="5.639214392s" podCreationTimestamp="2026-01-29 12:20:50 +0000 UTC" firstStartedPulling="2026-01-29 12:20:52.581735052 +0000 UTC m=+984.244714955" lastFinishedPulling="2026-01-29 12:20:55.303000659 +0000 UTC m=+986.965980552" observedRunningTime="2026-01-29 12:20:55.632734147 +0000 UTC m=+987.295714050" watchObservedRunningTime="2026-01-29 12:20:55.639214392 +0000 UTC m=+987.302194285" Jan 29 12:21:01 crc kubenswrapper[4840]: I0129 12:21:01.009557 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:21:01 crc kubenswrapper[4840]: I0129 12:21:01.010018 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:21:01 crc kubenswrapper[4840]: I0129 12:21:01.055233 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:21:01 crc kubenswrapper[4840]: I0129 12:21:01.686580 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:21:01 crc kubenswrapper[4840]: I0129 12:21:01.731988 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrff6"] Jan 29 12:21:03 crc kubenswrapper[4840]: I0129 12:21:03.659843 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xrff6" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="registry-server" containerID="cri-o://0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02" gracePeriod=2 Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.089344 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.257091 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmkdl\" (UniqueName: \"kubernetes.io/projected/2733e729-ef23-4660-94da-543b7351c382-kube-api-access-mmkdl\") pod \"2733e729-ef23-4660-94da-543b7351c382\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.257410 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-catalog-content\") pod \"2733e729-ef23-4660-94da-543b7351c382\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.257569 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-utilities\") pod \"2733e729-ef23-4660-94da-543b7351c382\" (UID: \"2733e729-ef23-4660-94da-543b7351c382\") " Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.258391 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-utilities" (OuterVolumeSpecName: "utilities") pod "2733e729-ef23-4660-94da-543b7351c382" (UID: "2733e729-ef23-4660-94da-543b7351c382"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.258816 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.269084 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2733e729-ef23-4660-94da-543b7351c382-kube-api-access-mmkdl" (OuterVolumeSpecName: "kube-api-access-mmkdl") pod "2733e729-ef23-4660-94da-543b7351c382" (UID: "2733e729-ef23-4660-94da-543b7351c382"). InnerVolumeSpecName "kube-api-access-mmkdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.282090 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2733e729-ef23-4660-94da-543b7351c382" (UID: "2733e729-ef23-4660-94da-543b7351c382"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.359999 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmkdl\" (UniqueName: \"kubernetes.io/projected/2733e729-ef23-4660-94da-543b7351c382-kube-api-access-mmkdl\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.360368 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2733e729-ef23-4660-94da-543b7351c382-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.667998 4840 generic.go:334] "Generic (PLEG): container finished" podID="2733e729-ef23-4660-94da-543b7351c382" containerID="0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02" exitCode=0 Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.668050 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrff6" event={"ID":"2733e729-ef23-4660-94da-543b7351c382","Type":"ContainerDied","Data":"0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02"} Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.668065 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xrff6" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.668099 4840 scope.go:117] "RemoveContainer" containerID="0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.668082 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xrff6" event={"ID":"2733e729-ef23-4660-94da-543b7351c382","Type":"ContainerDied","Data":"a3e8d49f0a009826950520a3e72a55177ea4b2e33e12d456788ed377f8574c2a"} Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.691654 4840 scope.go:117] "RemoveContainer" containerID="119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.720350 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrff6"] Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.728987 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xrff6"] Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.736439 4840 scope.go:117] "RemoveContainer" containerID="217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.772607 4840 scope.go:117] "RemoveContainer" containerID="0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02" Jan 29 12:21:04 crc kubenswrapper[4840]: E0129 12:21:04.776282 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02\": container with ID starting with 0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02 not found: ID does not exist" containerID="0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02" Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.776332 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02"} err="failed to get container status \"0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02\": rpc error: code = NotFound desc = could not find container \"0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02\": container with ID starting with 0a8545a35c0144da52bf586c1dffc3936f365d9dc410dcbaa9b1361c1d545a02 not found: ID does not exist"
Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.776361 4840 scope.go:117] "RemoveContainer" containerID="119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f"
Jan 29 12:21:04 crc kubenswrapper[4840]: E0129 12:21:04.776678 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f\": container with ID starting with 119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f not found: ID does not exist" containerID="119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f"
Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.776713 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f"} err="failed to get container status \"119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f\": rpc error: code = NotFound desc = could not find container \"119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f\": container with ID starting with 119945074f7cbceb44369a8147a4d3b476fd818c50f30ce902a1a9b3b806da7f not found: ID does not exist"
Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.776736 4840 scope.go:117] "RemoveContainer" containerID="217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975"
Jan 29 12:21:04 crc kubenswrapper[4840]: E0129 12:21:04.777626 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975\": container with ID starting with 217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975 not found: ID does not exist" containerID="217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975"
Jan 29 12:21:04 crc kubenswrapper[4840]: I0129 12:21:04.777650 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975"} err="failed to get container status \"217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975\": rpc error: code = NotFound desc = could not find container \"217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975\": container with ID starting with 217f3136006eefd6724dc3ce8c11c9eca8f23a4fa0afd84d962330d01b7e0975 not found: ID does not exist"
Jan 29 12:21:05 crc kubenswrapper[4840]: I0129 12:21:05.012919 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2733e729-ef23-4660-94da-543b7351c382" path="/var/lib/kubelet/pods/2733e729-ef23-4660-94da-543b7351c382/volumes"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.678996 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"]
Jan 29 12:21:12 crc kubenswrapper[4840]: E0129 12:21:12.679869 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="extract-utilities"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.679884 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="extract-utilities"
Jan 29 12:21:12 crc kubenswrapper[4840]: E0129 12:21:12.679899 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="extract-utilities"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.679906 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="extract-utilities"
Jan 29 12:21:12 crc kubenswrapper[4840]: E0129 12:21:12.679914 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="extract-content"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.679921 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="extract-content"
Jan 29 12:21:12 crc kubenswrapper[4840]: E0129 12:21:12.679963 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="registry-server"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.680007 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="registry-server"
Jan 29 12:21:12 crc kubenswrapper[4840]: E0129 12:21:12.680036 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="registry-server"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.680046 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="registry-server"
Jan 29 12:21:12 crc kubenswrapper[4840]: E0129 12:21:12.680055 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="extract-content"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.680063 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="extract-content"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.680204 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf08d3f4-a913-4be2-9b83-4e3ab6ddcb16" containerName="registry-server"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.680228 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="2733e729-ef23-4660-94da-543b7351c382" containerName="registry-server"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.680834 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.682529 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzv59\" (UniqueName: \"kubernetes.io/projected/7f388a54-98a3-410d-a742-c6e4501b70e0-kube-api-access-wzv59\") pod \"barbican-operator-controller-manager-657667746d-k2jnt\" (UID: \"7f388a54-98a3-410d-a742-c6e4501b70e0\") " pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.684382 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kb5rk"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.693175 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.694767 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.696870 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-clhwn"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.702148 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.712375 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.736037 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.736917 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.739417 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-4x48s"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.749418 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.750311 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.751744 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-k5qvt"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.757661 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.778023 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.779142 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.784779 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mw5g\" (UniqueName: \"kubernetes.io/projected/66a68829-6bd5-4912-9ee1-532cfd70df6e-kube-api-access-4mw5g\") pod \"heat-operator-controller-manager-5499bccc75-cbdm7\" (UID: \"66a68829-6bd5-4912-9ee1-532cfd70df6e\") " pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.784846 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xll2h\" (UniqueName: \"kubernetes.io/projected/8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0-kube-api-access-xll2h\") pod \"cinder-operator-controller-manager-7595cf584-blgz5\" (UID: \"8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0\") " pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.784868 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt7zp\" (UniqueName: \"kubernetes.io/projected/ef7c6b60-290c-4ff4-b3df-b9bebd38cd07-kube-api-access-jt7zp\") pod \"glance-operator-controller-manager-6db5dbd896-rxp7k\" (UID: \"ef7c6b60-290c-4ff4-b3df-b9bebd38cd07\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.784910 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hmw\" (UniqueName: \"kubernetes.io/projected/a2708669-810e-4c1f-8843-eb738dfec7e9-kube-api-access-w4hmw\") pod \"designate-operator-controller-manager-55d5d5f8ff-xkk5v\" (UID: \"a2708669-810e-4c1f-8843-eb738dfec7e9\") " pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.784937 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzv59\" (UniqueName: \"kubernetes.io/projected/7f388a54-98a3-410d-a742-c6e4501b70e0-kube-api-access-wzv59\") pod \"barbican-operator-controller-manager-657667746d-k2jnt\" (UID: \"7f388a54-98a3-410d-a742-c6e4501b70e0\") " pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.785146 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.789787 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-w8ff9"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.800113 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.860262 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzv59\" (UniqueName: \"kubernetes.io/projected/7f388a54-98a3-410d-a742-c6e4501b70e0-kube-api-access-wzv59\") pod \"barbican-operator-controller-manager-657667746d-k2jnt\" (UID: \"7f388a54-98a3-410d-a742-c6e4501b70e0\") " pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.891742 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xll2h\" (UniqueName: \"kubernetes.io/projected/8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0-kube-api-access-xll2h\") pod \"cinder-operator-controller-manager-7595cf584-blgz5\" (UID: \"8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0\") " pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.891792 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt7zp\" (UniqueName: \"kubernetes.io/projected/ef7c6b60-290c-4ff4-b3df-b9bebd38cd07-kube-api-access-jt7zp\") pod \"glance-operator-controller-manager-6db5dbd896-rxp7k\" (UID: \"ef7c6b60-290c-4ff4-b3df-b9bebd38cd07\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.891831 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hmw\" (UniqueName: \"kubernetes.io/projected/a2708669-810e-4c1f-8843-eb738dfec7e9-kube-api-access-w4hmw\") pod \"designate-operator-controller-manager-55d5d5f8ff-xkk5v\" (UID: \"a2708669-810e-4c1f-8843-eb738dfec7e9\") " pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.891902 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mw5g\" (UniqueName: \"kubernetes.io/projected/66a68829-6bd5-4912-9ee1-532cfd70df6e-kube-api-access-4mw5g\") pod \"heat-operator-controller-manager-5499bccc75-cbdm7\" (UID: \"66a68829-6bd5-4912-9ee1-532cfd70df6e\") " pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.901262 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.915166 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.930627 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt7zp\" (UniqueName: \"kubernetes.io/projected/ef7c6b60-290c-4ff4-b3df-b9bebd38cd07-kube-api-access-jt7zp\") pod \"glance-operator-controller-manager-6db5dbd896-rxp7k\" (UID: \"ef7c6b60-290c-4ff4-b3df-b9bebd38cd07\") " pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.931107 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-qdcs2"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.931506 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mw5g\" (UniqueName: \"kubernetes.io/projected/66a68829-6bd5-4912-9ee1-532cfd70df6e-kube-api-access-4mw5g\") pod \"heat-operator-controller-manager-5499bccc75-cbdm7\" (UID: \"66a68829-6bd5-4912-9ee1-532cfd70df6e\") " pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.944600 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xll2h\" (UniqueName: \"kubernetes.io/projected/8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0-kube-api-access-xll2h\") pod \"cinder-operator-controller-manager-7595cf584-blgz5\" (UID: \"8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0\") " pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.954867 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hmw\" (UniqueName: \"kubernetes.io/projected/a2708669-810e-4c1f-8843-eb738dfec7e9-kube-api-access-w4hmw\") pod \"designate-operator-controller-manager-55d5d5f8ff-xkk5v\" (UID: \"a2708669-810e-4c1f-8843-eb738dfec7e9\") " pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.954966 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.960991 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.976566 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.982411 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kh6f9"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.983077 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns"]
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.983877 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.985130 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8xwj4"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.989294 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.994628 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2csvw\" (UniqueName: \"kubernetes.io/projected/7671ead5-a005-40a7-b132-adda6935e9b8-kube-api-access-2csvw\") pod \"ironic-operator-controller-manager-56cb7c4b4c-m46ns\" (UID: \"7671ead5-a005-40a7-b132-adda6935e9b8\") " pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.994662 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmqpd\" (UniqueName: \"kubernetes.io/projected/2dc9b4aa-090d-43e7-b7d0-060d29ae213b-kube-api-access-tmqpd\") pod \"horizon-operator-controller-manager-5fb775575f-q5m62\" (UID: \"2dc9b4aa-090d-43e7-b7d0-060d29ae213b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.994685 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5gmn\" (UniqueName: \"kubernetes.io/projected/d19fb200-1d61-4334-b6aa-7b45c8b79502-kube-api-access-f5gmn\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"
Jan 29 12:21:12 crc kubenswrapper[4840]: I0129 12:21:12.994762 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.009870 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.014216 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.014867 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.021482 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.022466 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.026588 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-wblcx"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.031170 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.045820 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.048170 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.052320 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.053756 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.060393 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-6zg68"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.060742 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vmvdc"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.084211 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.084512 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.086636 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.097532 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.097579 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2csvw\" (UniqueName: \"kubernetes.io/projected/7671ead5-a005-40a7-b132-adda6935e9b8-kube-api-access-2csvw\") pod \"ironic-operator-controller-manager-56cb7c4b4c-m46ns\" (UID: \"7671ead5-a005-40a7-b132-adda6935e9b8\") " pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.097623 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmqpd\" (UniqueName: \"kubernetes.io/projected/2dc9b4aa-090d-43e7-b7d0-060d29ae213b-kube-api-access-tmqpd\") pod \"horizon-operator-controller-manager-5fb775575f-q5m62\" (UID: \"2dc9b4aa-090d-43e7-b7d0-060d29ae213b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.097660 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5gmn\" (UniqueName: \"kubernetes.io/projected/d19fb200-1d61-4334-b6aa-7b45c8b79502-kube-api-access-f5gmn\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"
Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.099330 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.099501 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert podName:d19fb200-1d61-4334-b6aa-7b45c8b79502 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:13.59947787 +0000 UTC m=+1005.262457773 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert") pod "infra-operator-controller-manager-79955696d6-wcq6w" (UID: "d19fb200-1d61-4334-b6aa-7b45c8b79502") : secret "infra-operator-webhook-server-cert" not found
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.115611 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.123587 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.125506 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.127463 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5gmn\" (UniqueName: \"kubernetes.io/projected/d19fb200-1d61-4334-b6aa-7b45c8b79502-kube-api-access-f5gmn\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.129080 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6hpw2"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.133102 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2csvw\" (UniqueName: \"kubernetes.io/projected/7671ead5-a005-40a7-b132-adda6935e9b8-kube-api-access-2csvw\") pod \"ironic-operator-controller-manager-56cb7c4b4c-m46ns\" (UID: \"7671ead5-a005-40a7-b132-adda6935e9b8\") " pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.136759 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.137840 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmqpd\" (UniqueName: \"kubernetes.io/projected/2dc9b4aa-090d-43e7-b7d0-060d29ae213b-kube-api-access-tmqpd\") pod \"horizon-operator-controller-manager-5fb775575f-q5m62\" (UID: \"2dc9b4aa-090d-43e7-b7d0-060d29ae213b\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.145705 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.146864 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.150041 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-fhvl4"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.156164 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.166099 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.168448 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.177250 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.178444 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.180220 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vt7sp"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.200773 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mv2z\" (UniqueName: \"kubernetes.io/projected/3928ea1f-3fa6-43b2-abe9-1f0554100f8c-kube-api-access-4mv2z\") pod \"mariadb-operator-controller-manager-67bf948998-k7s4n\" (UID: \"3928ea1f-3fa6-43b2-abe9-1f0554100f8c\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.200997 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxl67\" (UniqueName: \"kubernetes.io/projected/6a18af7c-6e6a-495b-88a7-7c44230a5ead-kube-api-access-gxl67\") pod \"manila-operator-controller-manager-6475bdcbc4-tf2sq\" (UID: \"6a18af7c-6e6a-495b-88a7-7c44230a5ead\") " pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.201067 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfvgn\" (UniqueName: \"kubernetes.io/projected/e6ee0757-467e-4f45-a612-0c6db645276b-kube-api-access-rfvgn\") pod \"octavia-operator-controller-manager-6b855b4fc4-mrvgg\" (UID: \"e6ee0757-467e-4f45-a612-0c6db645276b\") " pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.201132 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxj7v\" (UniqueName: \"kubernetes.io/projected/bfedf099-a451-4d2f-b473-4c7756870b55-kube-api-access-sxj7v\") pod \"keystone-operator-controller-manager-77bb7ffb8c-qht99\" (UID: \"bfedf099-a451-4d2f-b473-4c7756870b55\") " pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.215317 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.227092 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.228653 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.235115 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.235688 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pwj75"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.242403 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.243479 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.246264 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-sk5kv"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.289066 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.305053 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfvgn\" (UniqueName: \"kubernetes.io/projected/e6ee0757-467e-4f45-a612-0c6db645276b-kube-api-access-rfvgn\") pod \"octavia-operator-controller-manager-6b855b4fc4-mrvgg\" (UID: \"e6ee0757-467e-4f45-a612-0c6db645276b\") " pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.306111 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxl67\" (UniqueName: \"kubernetes.io/projected/6a18af7c-6e6a-495b-88a7-7c44230a5ead-kube-api-access-gxl67\") pod \"manila-operator-controller-manager-6475bdcbc4-tf2sq\" (UID: \"6a18af7c-6e6a-495b-88a7-7c44230a5ead\") " pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.306256 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxj7v\" (UniqueName: \"kubernetes.io/projected/bfedf099-a451-4d2f-b473-4c7756870b55-kube-api-access-sxj7v\") pod \"keystone-operator-controller-manager-77bb7ffb8c-qht99\" (UID: \"bfedf099-a451-4d2f-b473-4c7756870b55\") " pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.315036 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.316108 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ntj\" (UniqueName: \"kubernetes.io/projected/06f49550-e51b-4284-b192-ff5829c2bfaf-kube-api-access-45ntj\") pod \"neutron-operator-controller-manager-55df775b69-7lcnz\" (UID: \"06f49550-e51b-4284-b192-ff5829c2bfaf\") " pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.316253 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzkdg\" (UniqueName: \"kubernetes.io/projected/9d94bfa3-811b-4590-b407-220d5f249477-kube-api-access-gzkdg\") pod \"nova-operator-controller-manager-5ccd5b7f8f-c6hc5\" (UID: \"9d94bfa3-811b-4590-b407-220d5f249477\") " pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.316417 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mv2z\" (UniqueName: \"kubernetes.io/projected/3928ea1f-3fa6-43b2-abe9-1f0554100f8c-kube-api-access-4mv2z\") pod \"mariadb-operator-controller-manager-67bf948998-k7s4n\" (UID: \"3928ea1f-3fa6-43b2-abe9-1f0554100f8c\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.318458 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.321741 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.323142 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.316132 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.335064 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-xchb6"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.335094 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.335889 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-lrqvd"
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.347170 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt"]
Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.350792 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfvgn\" (UniqueName: \"kubernetes.io/projected/e6ee0757-467e-4f45-a612-0c6db645276b-kube-api-access-rfvgn\") pod \"octavia-operator-controller-manager-6b855b4fc4-mrvgg\" (UID: \"e6ee0757-467e-4f45-a612-0c6db645276b\") " pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg"
Jan
29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.358234 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.365805 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxl67\" (UniqueName: \"kubernetes.io/projected/6a18af7c-6e6a-495b-88a7-7c44230a5ead-kube-api-access-gxl67\") pod \"manila-operator-controller-manager-6475bdcbc4-tf2sq\" (UID: \"6a18af7c-6e6a-495b-88a7-7c44230a5ead\") " pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.375346 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sxj7v\" (UniqueName: \"kubernetes.io/projected/bfedf099-a451-4d2f-b473-4c7756870b55-kube-api-access-sxj7v\") pod \"keystone-operator-controller-manager-77bb7ffb8c-qht99\" (UID: \"bfedf099-a451-4d2f-b473-4c7756870b55\") " pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.375955 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.386666 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.389682 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.394592 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mv2z\" (UniqueName: \"kubernetes.io/projected/3928ea1f-3fa6-43b2-abe9-1f0554100f8c-kube-api-access-4mv2z\") pod \"mariadb-operator-controller-manager-67bf948998-k7s4n\" (UID: \"3928ea1f-3fa6-43b2-abe9-1f0554100f8c\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.394617 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-bwzzr" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.414347 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430430 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430499 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tmvq\" (UniqueName: \"kubernetes.io/projected/3ea9bf9d-883b-4e69-9ae6-6354a4004de3-kube-api-access-5tmvq\") pod \"ovn-operator-controller-manager-788c46999f-2gldt\" (UID: \"3ea9bf9d-883b-4e69-9ae6-6354a4004de3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430528 4840 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52sh\" (UniqueName: \"kubernetes.io/projected/dceb14fc-bfee-481f-82dc-c0f60d39f650-kube-api-access-b52sh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430553 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xwt\" (UniqueName: \"kubernetes.io/projected/7484cebd-11a3-4740-9791-dd3949b8dcaa-kube-api-access-g7xwt\") pod \"placement-operator-controller-manager-5b964cf4cd-znb4m\" (UID: \"7484cebd-11a3-4740-9791-dd3949b8dcaa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430582 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45ntj\" (UniqueName: \"kubernetes.io/projected/06f49550-e51b-4284-b192-ff5829c2bfaf-kube-api-access-45ntj\") pod \"neutron-operator-controller-manager-55df775b69-7lcnz\" (UID: \"06f49550-e51b-4284-b192-ff5829c2bfaf\") " pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430607 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzkdg\" (UniqueName: \"kubernetes.io/projected/9d94bfa3-811b-4590-b407-220d5f249477-kube-api-access-gzkdg\") pod \"nova-operator-controller-manager-5ccd5b7f8f-c6hc5\" (UID: \"9d94bfa3-811b-4590-b407-220d5f249477\") " pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430636 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hqhgj\" (UniqueName: \"kubernetes.io/projected/7dca7ac4-5015-46ef-aa82-15fea812cca2-kube-api-access-hqhgj\") pod \"telemetry-operator-controller-manager-c95fd9dc5-zjkxq\" (UID: \"7dca7ac4-5015-46ef-aa82-15fea812cca2\") " pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.430660 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmq7r\" (UniqueName: \"kubernetes.io/projected/8c034357-fd7c-4ff7-9b61-5a8491a6b34a-kube-api-access-gmq7r\") pod \"swift-operator-controller-manager-6f7455757b-qgrxt\" (UID: \"8c034357-fd7c-4ff7-9b61-5a8491a6b34a\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.431326 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.436659 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.465249 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45ntj\" (UniqueName: \"kubernetes.io/projected/06f49550-e51b-4284-b192-ff5829c2bfaf-kube-api-access-45ntj\") pod \"neutron-operator-controller-manager-55df775b69-7lcnz\" (UID: \"06f49550-e51b-4284-b192-ff5829c2bfaf\") " pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.470049 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzkdg\" (UniqueName: \"kubernetes.io/projected/9d94bfa3-811b-4590-b407-220d5f249477-kube-api-access-gzkdg\") pod \"nova-operator-controller-manager-5ccd5b7f8f-c6hc5\" (UID: \"9d94bfa3-811b-4590-b407-220d5f249477\") " pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.474484 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.478508 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.479437 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.483982 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-zmfkk" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.486284 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.499735 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.520128 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.526714 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.528657 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.532395 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-w8lj2" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.534631 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqhgj\" (UniqueName: \"kubernetes.io/projected/7dca7ac4-5015-46ef-aa82-15fea812cca2-kube-api-access-hqhgj\") pod \"telemetry-operator-controller-manager-c95fd9dc5-zjkxq\" (UID: \"7dca7ac4-5015-46ef-aa82-15fea812cca2\") " pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.535588 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmq7r\" (UniqueName: \"kubernetes.io/projected/8c034357-fd7c-4ff7-9b61-5a8491a6b34a-kube-api-access-gmq7r\") pod \"swift-operator-controller-manager-6f7455757b-qgrxt\" (UID: \"8c034357-fd7c-4ff7-9b61-5a8491a6b34a\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.535822 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd424\" (UniqueName: \"kubernetes.io/projected/bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5-kube-api-access-fd424\") pod \"test-operator-controller-manager-56f8bfcd9f-ktps6\" (UID: \"bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.536033 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod 
\"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.536137 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tmvq\" (UniqueName: \"kubernetes.io/projected/3ea9bf9d-883b-4e69-9ae6-6354a4004de3-kube-api-access-5tmvq\") pod \"ovn-operator-controller-manager-788c46999f-2gldt\" (UID: \"3ea9bf9d-883b-4e69-9ae6-6354a4004de3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.536243 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b52sh\" (UniqueName: \"kubernetes.io/projected/dceb14fc-bfee-481f-82dc-c0f60d39f650-kube-api-access-b52sh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.536362 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xwt\" (UniqueName: \"kubernetes.io/projected/7484cebd-11a3-4740-9791-dd3949b8dcaa-kube-api-access-g7xwt\") pod \"placement-operator-controller-manager-5b964cf4cd-znb4m\" (UID: \"7484cebd-11a3-4740-9791-dd3949b8dcaa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.538196 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.538397 4840 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert podName:dceb14fc-bfee-481f-82dc-c0f60d39f650 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:14.038362167 +0000 UTC m=+1005.701342060 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" (UID: "dceb14fc-bfee-481f-82dc-c0f60d39f650") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.540524 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.554505 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.564718 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tmvq\" (UniqueName: \"kubernetes.io/projected/3ea9bf9d-883b-4e69-9ae6-6354a4004de3-kube-api-access-5tmvq\") pod \"ovn-operator-controller-manager-788c46999f-2gldt\" (UID: \"3ea9bf9d-883b-4e69-9ae6-6354a4004de3\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.565792 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52sh\" (UniqueName: \"kubernetes.io/projected/dceb14fc-bfee-481f-82dc-c0f60d39f650-kube-api-access-b52sh\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.566587 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmq7r\" (UniqueName: \"kubernetes.io/projected/8c034357-fd7c-4ff7-9b61-5a8491a6b34a-kube-api-access-gmq7r\") pod \"swift-operator-controller-manager-6f7455757b-qgrxt\" (UID: \"8c034357-fd7c-4ff7-9b61-5a8491a6b34a\") " pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.580582 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.588377 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xwt\" (UniqueName: \"kubernetes.io/projected/7484cebd-11a3-4740-9791-dd3949b8dcaa-kube-api-access-g7xwt\") pod \"placement-operator-controller-manager-5b964cf4cd-znb4m\" (UID: \"7484cebd-11a3-4740-9791-dd3949b8dcaa\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.588536 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqhgj\" (UniqueName: \"kubernetes.io/projected/7dca7ac4-5015-46ef-aa82-15fea812cca2-kube-api-access-hqhgj\") pod \"telemetry-operator-controller-manager-c95fd9dc5-zjkxq\" (UID: \"7dca7ac4-5015-46ef-aa82-15fea812cca2\") " pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.598288 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.599330 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.604055 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.604336 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.604449 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-778p4" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.610378 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.627715 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.629016 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.631707 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-4zqvr" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.633580 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4"] Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.634442 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.638785 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.638981 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.639749 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd424\" (UniqueName: \"kubernetes.io/projected/bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5-kube-api-access-fd424\") pod \"test-operator-controller-manager-56f8bfcd9f-ktps6\" (UID: \"bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.639894 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.640386 4840 secret.go:188] 
Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.640456 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert podName:d19fb200-1d61-4334-b6aa-7b45c8b79502 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:14.640430336 +0000 UTC m=+1006.303410229 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert") pod "infra-operator-controller-manager-79955696d6-wcq6w" (UID: "d19fb200-1d61-4334-b6aa-7b45c8b79502") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.640992 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pjwv\" (UniqueName: \"kubernetes.io/projected/882d0d06-c72a-46d5-985b-578094eedc4c-kube-api-access-9pjwv\") pod \"watcher-operator-controller-manager-56b5dc77fd-7lx5t\" (UID: \"882d0d06-c72a-46d5-985b-578094eedc4c\") " pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.641106 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8h9\" (UniqueName: \"kubernetes.io/projected/201f6c04-44f6-4d28-bb64-b6f99c322f55-kube-api-access-gl8h9\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.673237 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd424\" (UniqueName: 
\"kubernetes.io/projected/bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5-kube-api-access-fd424\") pod \"test-operator-controller-manager-56f8bfcd9f-ktps6\" (UID: \"bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.691898 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.726377 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.742698 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pjwv\" (UniqueName: \"kubernetes.io/projected/882d0d06-c72a-46d5-985b-578094eedc4c-kube-api-access-9pjwv\") pod \"watcher-operator-controller-manager-56b5dc77fd-7lx5t\" (UID: \"882d0d06-c72a-46d5-985b-578094eedc4c\") " pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.742957 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl8h9\" (UniqueName: \"kubernetes.io/projected/201f6c04-44f6-4d28-bb64-b6f99c322f55-kube-api-access-gl8h9\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.743066 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd2xb\" (UniqueName: \"kubernetes.io/projected/bfc51868-5269-40cb-b17f-054309802b44-kube-api-access-nd2xb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lm2m4\" (UID: 
\"bfc51868-5269-40cb-b17f-054309802b44\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.743176 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.743276 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.743564 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.743652 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:14.243626775 +0000 UTC m=+1005.906606668 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "webhook-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.743833 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: E0129 12:21:13.743987 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:14.243958783 +0000 UTC m=+1005.906938676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "metrics-server-cert" not found Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.761373 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl8h9\" (UniqueName: \"kubernetes.io/projected/201f6c04-44f6-4d28-bb64-b6f99c322f55-kube-api-access-gl8h9\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.762551 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pjwv\" (UniqueName: \"kubernetes.io/projected/882d0d06-c72a-46d5-985b-578094eedc4c-kube-api-access-9pjwv\") pod \"watcher-operator-controller-manager-56b5dc77fd-7lx5t\" (UID: \"882d0d06-c72a-46d5-985b-578094eedc4c\") " 
pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.826780 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.845528 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd2xb\" (UniqueName: \"kubernetes.io/projected/bfc51868-5269-40cb-b17f-054309802b44-kube-api-access-nd2xb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lm2m4\" (UID: \"bfc51868-5269-40cb-b17f-054309802b44\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.867735 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd2xb\" (UniqueName: \"kubernetes.io/projected/bfc51868-5269-40cb-b17f-054309802b44-kube-api-access-nd2xb\") pod \"rabbitmq-cluster-operator-manager-668c99d594-lm2m4\" (UID: \"bfc51868-5269-40cb-b17f-054309802b44\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" Jan 29 12:21:13 crc kubenswrapper[4840]: I0129 12:21:13.969438 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.050453 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.051670 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.051797 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert podName:dceb14fc-bfee-481f-82dc-c0f60d39f650 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:15.051764742 +0000 UTC m=+1006.714744635 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" (UID: "dceb14fc-bfee-481f-82dc-c0f60d39f650") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.115072 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.256543 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.256604 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.256784 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.256882 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.256897 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:15.256872736 +0000 UTC m=+1006.919852809 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "webhook-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.257025 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:15.256993169 +0000 UTC m=+1006.919973272 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "metrics-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.361141 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.453436 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.466656 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v"] Jan 29 12:21:14 crc kubenswrapper[4840]: W0129 12:21:14.466701 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7f388a54_98a3_410d_a742_c6e4501b70e0.slice/crio-78534a48f77eca2a3c43b27620f73cce1ddceed27a7ebacf74a865205644231f WatchSource:0}: Error finding container 78534a48f77eca2a3c43b27620f73cce1ddceed27a7ebacf74a865205644231f: Status 404 returned 
error can't find the container with id 78534a48f77eca2a3c43b27620f73cce1ddceed27a7ebacf74a865205644231f Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.661800 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.662011 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: E0129 12:21:14.662075 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert podName:d19fb200-1d61-4334-b6aa-7b45c8b79502 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:16.662055116 +0000 UTC m=+1008.325035009 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert") pod "infra-operator-controller-manager-79955696d6-wcq6w" (UID: "d19fb200-1d61-4334-b6aa-7b45c8b79502") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.758234 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5" event={"ID":"8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0","Type":"ContainerStarted","Data":"db6092b820fa404c9b6e2af4b4408e42b022f6dfb21669b2209855cdbe20eca6"} Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.765306 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v" event={"ID":"a2708669-810e-4c1f-8843-eb738dfec7e9","Type":"ContainerStarted","Data":"d06fbc7261a742a066e77b80084b7cd7fc84eb1139a022017c39791db3c6f38b"} Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.766917 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt" event={"ID":"7f388a54-98a3-410d-a742-c6e4501b70e0","Type":"ContainerStarted","Data":"78534a48f77eca2a3c43b27620f73cce1ddceed27a7ebacf74a865205644231f"} Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.826267 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.836538 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.865331 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.880906 4840 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.887306 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.914713 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.937742 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.949315 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.961686 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.975961 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz"] Jan 29 12:21:14 crc kubenswrapper[4840]: W0129 12:21:14.982049 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66a68829_6bd5_4912_9ee1_532cfd70df6e.slice/crio-670def5ff1d2dfee90490c9508e62ccd4a96ec26ed6c5b760270a79bdad2a200 WatchSource:0}: Error finding container 670def5ff1d2dfee90490c9508e62ccd4a96ec26ed6c5b760270a79bdad2a200: Status 404 returned error can't find the container with id 670def5ff1d2dfee90490c9508e62ccd4a96ec26ed6c5b760270a79bdad2a200 Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.989734 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns"] Jan 29 12:21:14 crc kubenswrapper[4840]: I0129 12:21:14.995499 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6"] Jan 29 12:21:15 crc kubenswrapper[4840]: W0129 12:21:15.017779 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3928ea1f_3fa6_43b2_abe9_1f0554100f8c.slice/crio-76ed7170d55f9f2ef49aedc57ecfa020e18d4e477756fc121cc507caf93d741f WatchSource:0}: Error finding container 76ed7170d55f9f2ef49aedc57ecfa020e18d4e477756fc121cc507caf93d741f: Status 404 returned error can't find the container with id 76ed7170d55f9f2ef49aedc57ecfa020e18d4e477756fc121cc507caf93d741f Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.053088 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt"] Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.057620 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/ironic-operator@sha256:258f62f07a81276471e70a7f56776c57c7c928ecfe99c09b9b88b735c658d515,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2csvw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-56cb7c4b4c-m46ns_openstack-operators(7671ead5-a005-40a7-b132-adda6935e9b8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.060456 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4"] Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.061043 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq"] Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.061059 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m"] Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.061587 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fd424,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-ktps6_openstack-operators(bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.061642 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" podUID="7671ead5-a005-40a7-b132-adda6935e9b8" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.063630 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" podUID="bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.064036 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t"] Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.067384 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nd2xb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-lm2m4_openstack-operators(bfc51868-5269-40cb-b17f-054309802b44): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.067645 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.067774 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.067836 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert podName:dceb14fc-bfee-481f-82dc-c0f60d39f650 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:17.067816312 +0000 UTC m=+1008.730796205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" (UID: "dceb14fc-bfee-481f-82dc-c0f60d39f650") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.068909 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" podUID="bfc51868-5269-40cb-b17f-054309802b44" Jan 29 12:21:15 crc kubenswrapper[4840]: W0129 12:21:15.069842 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dca7ac4_5015_46ef_aa82_15fea812cca2.slice/crio-0c4c5fce1812f7c9237573d0ba43014bd05cf8e9b87d599b8fd78c08933ed43f WatchSource:0}: Error finding container 0c4c5fce1812f7c9237573d0ba43014bd05cf8e9b87d599b8fd78c08933ed43f: Status 404 returned error can't find the container with id 0c4c5fce1812f7c9237573d0ba43014bd05cf8e9b87d599b8fd78c08933ed43f Jan 29 12:21:15 crc kubenswrapper[4840]: W0129 12:21:15.072350 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06f49550_e51b_4284_b192_ff5829c2bfaf.slice/crio-c3ae9463f536f3516605b48f7dfd004afe2df636c7b37756a8ed1eb3748474e5 WatchSource:0}: Error finding container c3ae9463f536f3516605b48f7dfd004afe2df636c7b37756a8ed1eb3748474e5: Status 404 returned error can't find the container with id c3ae9463f536f3516605b48f7dfd004afe2df636c7b37756a8ed1eb3748474e5 Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.075522 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/telemetry-operator@sha256:3e0713a6e9097420ebc622a70d44fcc5e5e9b9e036babe2966afd7c6fb5dc40b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hqhgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-c95fd9dc5-zjkxq_openstack-operators(7dca7ac4-5015-46ef-aa82-15fea812cca2): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.077070 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/neutron-operator@sha256:dd0753a83370d8c2e89107b68602779c3c975e1ea9cf3b20a7d262897ddb9b9b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-45ntj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-55df775b69-7lcnz_openstack-operators(06f49550-e51b-4284-b192-ff5829c2bfaf): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.077155 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" podUID="7dca7ac4-5015-46ef-aa82-15fea812cca2" Jan 29 12:21:15 crc 
kubenswrapper[4840]: W0129 12:21:15.078149 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7484cebd_11a3_4740_9791_dd3949b8dcaa.slice/crio-94b1feb967f662e3d2efb9f99247f7323f8b40177cad9bebd204b53725706dea WatchSource:0}: Error finding container 94b1feb967f662e3d2efb9f99247f7323f8b40177cad9bebd204b53725706dea: Status 404 returned error can't find the container with id 94b1feb967f662e3d2efb9f99247f7323f8b40177cad9bebd204b53725706dea Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.078244 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" podUID="06f49550-e51b-4284-b192-ff5829c2bfaf" Jan 29 12:21:15 crc kubenswrapper[4840]: W0129 12:21:15.079349 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod882d0d06_c72a_46d5_985b_578094eedc4c.slice/crio-e73480c59165aef7fa5c6027fbb89669794181e2e3ad5cc16fa2f7b769234783 WatchSource:0}: Error finding container e73480c59165aef7fa5c6027fbb89669794181e2e3ad5cc16fa2f7b769234783: Status 404 returned error can't find the container with id e73480c59165aef7fa5c6027fbb89669794181e2e3ad5cc16fa2f7b769234783 Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.081009 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g7xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-znb4m_openstack-operators(7484cebd-11a3-4740-9791-dd3949b8dcaa): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.081604 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/watcher-operator@sha256:99cd507ce61fb7c28e3cd9843ef5764e698069f050048eec8abd4afa24bfc1ea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9pjwv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-56b5dc77fd-7lx5t_openstack-operators(882d0d06-c72a-46d5-985b-578094eedc4c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.082093 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" podUID="7484cebd-11a3-4740-9791-dd3949b8dcaa" Jan 29 12:21:15 crc 
kubenswrapper[4840]: E0129 12:21:15.083660 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" podUID="882d0d06-c72a-46d5-985b-578094eedc4c" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.281984 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.282046 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.282234 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.282288 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:17.282271826 +0000 UTC m=+1008.945251719 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "metrics-server-cert" not found Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.282655 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.282685 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:17.282674957 +0000 UTC m=+1008.945654850 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "webhook-server-cert" not found Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.798735 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" event={"ID":"7484cebd-11a3-4740-9791-dd3949b8dcaa","Type":"ContainerStarted","Data":"94b1feb967f662e3d2efb9f99247f7323f8b40177cad9bebd204b53725706dea"} Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.803606 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" podUID="7484cebd-11a3-4740-9791-dd3949b8dcaa" Jan 29 12:21:15 crc 
kubenswrapper[4840]: I0129 12:21:15.803864 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n" event={"ID":"3928ea1f-3fa6-43b2-abe9-1f0554100f8c","Type":"ContainerStarted","Data":"76ed7170d55f9f2ef49aedc57ecfa020e18d4e477756fc121cc507caf93d741f"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.816292 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7" event={"ID":"66a68829-6bd5-4912-9ee1-532cfd70df6e","Type":"ContainerStarted","Data":"670def5ff1d2dfee90490c9508e62ccd4a96ec26ed6c5b760270a79bdad2a200"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.821588 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62" event={"ID":"2dc9b4aa-090d-43e7-b7d0-060d29ae213b","Type":"ContainerStarted","Data":"ee0fe9ffab26edda0310cc4db015bbe55b842d95c8626b1acb3c2bd4217cc997"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.823410 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" event={"ID":"882d0d06-c72a-46d5-985b-578094eedc4c","Type":"ContainerStarted","Data":"e73480c59165aef7fa5c6027fbb89669794181e2e3ad5cc16fa2f7b769234783"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.834068 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" event={"ID":"06f49550-e51b-4284-b192-ff5829c2bfaf","Type":"ContainerStarted","Data":"c3ae9463f536f3516605b48f7dfd004afe2df636c7b37756a8ed1eb3748474e5"} Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.842054 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/neutron-operator@sha256:dd0753a83370d8c2e89107b68602779c3c975e1ea9cf3b20a7d262897ddb9b9b\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" podUID="06f49550-e51b-4284-b192-ff5829c2bfaf" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.843308 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" event={"ID":"bfc51868-5269-40cb-b17f-054309802b44","Type":"ContainerStarted","Data":"fc4c2e4940d208cee30140ccf6fb1ea81d564394b0c05b82c8d099e5239fcb61"} Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.844931 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" podUID="bfc51868-5269-40cb-b17f-054309802b44" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.846193 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" event={"ID":"3ea9bf9d-883b-4e69-9ae6-6354a4004de3","Type":"ContainerStarted","Data":"0d691e512b0f20498ded32ce086f9ba6664afcd9f464ff1fe754c5ec03886eb6"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.848115 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" event={"ID":"9d94bfa3-811b-4590-b407-220d5f249477","Type":"ContainerStarted","Data":"26e9dd41a657e199a880386403edeb40addd0d71e962daf8fb892747fad7c3a6"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.850713 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" 
event={"ID":"8c034357-fd7c-4ff7-9b61-5a8491a6b34a","Type":"ContainerStarted","Data":"a2ddba798b0dc7f30fe034b0e6bcec59ba1ae8c77ef5c0ef9f81d7047cfacf41"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.851806 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" event={"ID":"bfedf099-a451-4d2f-b473-4c7756870b55","Type":"ContainerStarted","Data":"e300c1a24539392d000a0d6e43606825843e2d765fd24c7f637c72a628e82213"} Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.853412 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:99cd507ce61fb7c28e3cd9843ef5764e698069f050048eec8abd4afa24bfc1ea\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" podUID="882d0d06-c72a-46d5-985b-578094eedc4c" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.854263 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" event={"ID":"7dca7ac4-5015-46ef-aa82-15fea812cca2","Type":"ContainerStarted","Data":"0c4c5fce1812f7c9237573d0ba43014bd05cf8e9b87d599b8fd78c08933ed43f"} Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.855848 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:3e0713a6e9097420ebc622a70d44fcc5e5e9b9e036babe2966afd7c6fb5dc40b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" podUID="7dca7ac4-5015-46ef-aa82-15fea812cca2" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.856573 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" 
event={"ID":"7671ead5-a005-40a7-b132-adda6935e9b8","Type":"ContainerStarted","Data":"68b9802c878f1714b90a9aa1a08ad5f6f0c10b4b79b935172aa9b2eda5d6eb74"} Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.857893 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:258f62f07a81276471e70a7f56776c57c7c928ecfe99c09b9b88b735c658d515\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" podUID="7671ead5-a005-40a7-b132-adda6935e9b8" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.858177 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k" event={"ID":"ef7c6b60-290c-4ff4-b3df-b9bebd38cd07","Type":"ContainerStarted","Data":"6522688688a6e1ea8f0a3fdba33ea6a545571f564ea51eade43ea2f519fabf6d"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.861590 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq" event={"ID":"6a18af7c-6e6a-495b-88a7-7c44230a5ead","Type":"ContainerStarted","Data":"037ad318ebc6e080b82f734eea5d9c63aaec9fbf731fef8bda7dae4ad4d9545f"} Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.869175 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" event={"ID":"bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5","Type":"ContainerStarted","Data":"227fe3798885ecbf90029ea88b87dfa73161bc34de7e4e159de3242fc01d5e88"} Jan 29 12:21:15 crc kubenswrapper[4840]: E0129 12:21:15.871596 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" 
pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" podUID="bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5" Jan 29 12:21:15 crc kubenswrapper[4840]: I0129 12:21:15.872344 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg" event={"ID":"e6ee0757-467e-4f45-a612-0c6db645276b","Type":"ContainerStarted","Data":"beea7ca308128f533fc7e416872d85bca2c6cc802930f805c240bc7cf4deea46"} Jan 29 12:21:16 crc kubenswrapper[4840]: I0129 12:21:16.723755 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.723969 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.724036 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert podName:d19fb200-1d61-4334-b6aa-7b45c8b79502 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:20.724015569 +0000 UTC m=+1012.386995462 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert") pod "infra-operator-controller-manager-79955696d6-wcq6w" (UID: "d19fb200-1d61-4334-b6aa-7b45c8b79502") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.886494 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" podUID="7484cebd-11a3-4740-9791-dd3949b8dcaa" Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.886730 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" podUID="bfc51868-5269-40cb-b17f-054309802b44" Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.886781 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/telemetry-operator@sha256:3e0713a6e9097420ebc622a70d44fcc5e5e9b9e036babe2966afd7c6fb5dc40b\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" podUID="7dca7ac4-5015-46ef-aa82-15fea812cca2" Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.886833 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/neutron-operator@sha256:dd0753a83370d8c2e89107b68602779c3c975e1ea9cf3b20a7d262897ddb9b9b\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" podUID="06f49550-e51b-4284-b192-ff5829c2bfaf" Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.886852 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/ironic-operator@sha256:258f62f07a81276471e70a7f56776c57c7c928ecfe99c09b9b88b735c658d515\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" podUID="7671ead5-a005-40a7-b132-adda6935e9b8" Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.890628 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" podUID="bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5" Jan 29 12:21:16 crc kubenswrapper[4840]: E0129 12:21:16.890926 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/watcher-operator@sha256:99cd507ce61fb7c28e3cd9843ef5764e698069f050048eec8abd4afa24bfc1ea\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" podUID="882d0d06-c72a-46d5-985b-578094eedc4c" Jan 29 12:21:17 crc kubenswrapper[4840]: I0129 12:21:17.140419 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:17 crc kubenswrapper[4840]: E0129 12:21:17.140706 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:17 crc kubenswrapper[4840]: E0129 12:21:17.140856 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert podName:dceb14fc-bfee-481f-82dc-c0f60d39f650 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:21.140818522 +0000 UTC m=+1012.803798415 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" (UID: "dceb14fc-bfee-481f-82dc-c0f60d39f650") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:17 crc kubenswrapper[4840]: I0129 12:21:17.345013 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:17 crc kubenswrapper[4840]: I0129 12:21:17.345097 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:17 crc kubenswrapper[4840]: E0129 12:21:17.345369 4840 secret.go:188] Couldn't get 
secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:21:17 crc kubenswrapper[4840]: E0129 12:21:17.345445 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:21.345423753 +0000 UTC m=+1013.008403646 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "metrics-server-cert" not found Jan 29 12:21:17 crc kubenswrapper[4840]: E0129 12:21:17.345927 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:21:17 crc kubenswrapper[4840]: E0129 12:21:17.345986 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:21.345978177 +0000 UTC m=+1013.008958070 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "webhook-server-cert" not found Jan 29 12:21:20 crc kubenswrapper[4840]: I0129 12:21:20.795225 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:20 crc kubenswrapper[4840]: E0129 12:21:20.795402 4840 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:20 crc kubenswrapper[4840]: E0129 12:21:20.796005 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert podName:d19fb200-1d61-4334-b6aa-7b45c8b79502 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:28.795984546 +0000 UTC m=+1020.458964459 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert") pod "infra-operator-controller-manager-79955696d6-wcq6w" (UID: "d19fb200-1d61-4334-b6aa-7b45c8b79502") : secret "infra-operator-webhook-server-cert" not found Jan 29 12:21:21 crc kubenswrapper[4840]: I0129 12:21:21.200559 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:21 crc kubenswrapper[4840]: E0129 12:21:21.200697 4840 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:21 crc kubenswrapper[4840]: E0129 12:21:21.200770 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert podName:dceb14fc-bfee-481f-82dc-c0f60d39f650 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:29.200750815 +0000 UTC m=+1020.863730708 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" (UID: "dceb14fc-bfee-481f-82dc-c0f60d39f650") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 29 12:21:21 crc kubenswrapper[4840]: I0129 12:21:21.402556 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:21 crc kubenswrapper[4840]: I0129 12:21:21.402629 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:21 crc kubenswrapper[4840]: E0129 12:21:21.402794 4840 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 29 12:21:21 crc kubenswrapper[4840]: E0129 12:21:21.402872 4840 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 29 12:21:21 crc kubenswrapper[4840]: E0129 12:21:21.402922 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:29.402893068 +0000 UTC m=+1021.065872961 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "webhook-server-cert" not found Jan 29 12:21:21 crc kubenswrapper[4840]: E0129 12:21:21.402984 4840 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs podName:201f6c04-44f6-4d28-bb64-b6f99c322f55 nodeName:}" failed. No retries permitted until 2026-01-29 12:21:29.402957909 +0000 UTC m=+1021.065937802 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs") pod "openstack-operator-controller-manager-65dc8f5954-5vmnj" (UID: "201f6c04-44f6-4d28-bb64-b6f99c322f55") : secret "metrics-server-cert" not found Jan 29 12:21:28 crc kubenswrapper[4840]: I0129 12:21:28.829242 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:28 crc kubenswrapper[4840]: I0129 12:21:28.842472 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d19fb200-1d61-4334-b6aa-7b45c8b79502-cert\") pod \"infra-operator-controller-manager-79955696d6-wcq6w\" (UID: \"d19fb200-1d61-4334-b6aa-7b45c8b79502\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:28 crc kubenswrapper[4840]: I0129 12:21:28.956584 4840 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-kh6f9" Jan 29 12:21:28 crc kubenswrapper[4840]: I0129 12:21:28.966041 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.236305 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.249794 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dceb14fc-bfee-481f-82dc-c0f60d39f650-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg\" (UID: \"dceb14fc-bfee-481f-82dc-c0f60d39f650\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.438782 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.438852 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " 
pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.442294 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-metrics-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.442498 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/201f6c04-44f6-4d28-bb64-b6f99c322f55-webhook-certs\") pod \"openstack-operator-controller-manager-65dc8f5954-5vmnj\" (UID: \"201f6c04-44f6-4d28-bb64-b6f99c322f55\") " pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.462854 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pwj75" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.471426 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.663070 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-778p4" Jan 29 12:21:29 crc kubenswrapper[4840]: I0129 12:21:29.671301 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:30 crc kubenswrapper[4840]: E0129 12:21:30.455435 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 29 12:21:30 crc kubenswrapper[4840]: E0129 12:21:30.456043 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tmqpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-q5m62_openstack-operators(2dc9b4aa-090d-43e7-b7d0-060d29ae213b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:21:30 crc kubenswrapper[4840]: E0129 12:21:30.457256 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62" podUID="2dc9b4aa-090d-43e7-b7d0-060d29ae213b" Jan 29 12:21:30 crc kubenswrapper[4840]: E0129 12:21:30.995039 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62" podUID="2dc9b4aa-090d-43e7-b7d0-060d29ae213b" Jan 29 12:21:31 crc kubenswrapper[4840]: E0129 12:21:31.033037 4840 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8" Jan 29 12:21:31 crc kubenswrapper[4840]: E0129 12:21:31.033231 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gmq7r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-6f7455757b-qgrxt_openstack-operators(8c034357-fd7c-4ff7-9b61-5a8491a6b34a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:21:31 crc kubenswrapper[4840]: E0129 12:21:31.036183 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" podUID="8c034357-fd7c-4ff7-9b61-5a8491a6b34a" Jan 29 12:21:32 crc kubenswrapper[4840]: E0129 12:21:32.001898 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/swift-operator@sha256:4dfb3cd42806f7989d962e2346a58c6358e70cf95c41b4890e26cb5219805ac8\\\"\"" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" podUID="8c034357-fd7c-4ff7-9b61-5a8491a6b34a" Jan 29 12:21:33 crc kubenswrapper[4840]: E0129 12:21:33.115384 4840 log.go:32] "PullImage from image service failed" err="rpc error: 
code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/heat-operator@sha256:974ede525fb74cb22e044a6b64928afd19c510bc0e7dc2b4b9db849d212c2282" Jan 29 12:21:33 crc kubenswrapper[4840]: E0129 12:21:33.116840 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/heat-operator@sha256:974ede525fb74cb22e044a6b64928afd19c510bc0e7dc2b4b9db849d212c2282,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4mw5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5499bccc75-cbdm7_openstack-operators(66a68829-6bd5-4912-9ee1-532cfd70df6e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:21:33 crc kubenswrapper[4840]: E0129 12:21:33.118724 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7" podUID="66a68829-6bd5-4912-9ee1-532cfd70df6e" Jan 29 12:21:33 crc kubenswrapper[4840]: E0129 12:21:33.855629 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/keystone-operator@sha256:acd5f9f02ae459f594ae752100606a9dab02d4ff4ba0d8b24534129e79b34534" Jan 29 12:21:33 crc kubenswrapper[4840]: E0129 12:21:33.856021 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/lmiccini/keystone-operator@sha256:acd5f9f02ae459f594ae752100606a9dab02d4ff4ba0d8b24534129e79b34534,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sxj7v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-77bb7ffb8c-qht99_openstack-operators(bfedf099-a451-4d2f-b473-4c7756870b55): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:21:33 crc kubenswrapper[4840]: E0129 12:21:33.857315 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" podUID="bfedf099-a451-4d2f-b473-4c7756870b55" Jan 29 12:21:34 crc kubenswrapper[4840]: E0129 12:21:34.081892 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/heat-operator@sha256:974ede525fb74cb22e044a6b64928afd19c510bc0e7dc2b4b9db849d212c2282\\\"\"" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7" podUID="66a68829-6bd5-4912-9ee1-532cfd70df6e" Jan 29 12:21:34 crc kubenswrapper[4840]: E0129 12:21:34.083455 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/lmiccini/keystone-operator@sha256:acd5f9f02ae459f594ae752100606a9dab02d4ff4ba0d8b24534129e79b34534\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" podUID="bfedf099-a451-4d2f-b473-4c7756870b55" Jan 29 12:21:34 crc kubenswrapper[4840]: E0129 12:21:34.574066 4840 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/lmiccini/nova-operator@sha256:5a7235a96194f43fbbbee4085b28e1749733862ce801ef67413f496a1e5826bc" Jan 29 12:21:34 crc kubenswrapper[4840]: E0129 12:21:34.574277 4840 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/lmiccini/nova-operator@sha256:5a7235a96194f43fbbbee4085b28e1749733862ce801ef67413f496a1e5826bc,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gzkdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-5ccd5b7f8f-c6hc5_openstack-operators(9d94bfa3-811b-4590-b407-220d5f249477): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 29 12:21:34 crc kubenswrapper[4840]: E0129 12:21:34.575439 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" podUID="9d94bfa3-811b-4590-b407-220d5f249477" Jan 29 12:21:35 crc kubenswrapper[4840]: E0129 12:21:35.024673 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/lmiccini/nova-operator@sha256:5a7235a96194f43fbbbee4085b28e1749733862ce801ef67413f496a1e5826bc\\\"\"" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" podUID="9d94bfa3-811b-4590-b407-220d5f249477" Jan 29 12:21:40 crc kubenswrapper[4840]: I0129 12:21:40.364199 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj"] Jan 29 12:21:40 crc kubenswrapper[4840]: I0129 12:21:40.419862 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w"] Jan 29 12:21:40 crc kubenswrapper[4840]: I0129 12:21:40.425153 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg"] Jan 29 12:21:40 crc kubenswrapper[4840]: W0129 12:21:40.576285 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod201f6c04_44f6_4d28_bb64_b6f99c322f55.slice/crio-5dbf33e5a3b4282e57e68423f383880a4cbf4186487e741b11d5068cf4d16f13 WatchSource:0}: Error finding container 5dbf33e5a3b4282e57e68423f383880a4cbf4186487e741b11d5068cf4d16f13: Status 404 returned error can't find the container with id 5dbf33e5a3b4282e57e68423f383880a4cbf4186487e741b11d5068cf4d16f13 Jan 29 12:21:40 crc kubenswrapper[4840]: W0129 12:21:40.579902 4840 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd19fb200_1d61_4334_b6aa_7b45c8b79502.slice/crio-492786fb83a8c27d09e5204d85d832830199a12c6b1add28d24093365d73dee0 WatchSource:0}: Error finding container 492786fb83a8c27d09e5204d85d832830199a12c6b1add28d24093365d73dee0: Status 404 returned error can't find the container with id 492786fb83a8c27d09e5204d85d832830199a12c6b1add28d24093365d73dee0 Jan 29 12:21:40 crc kubenswrapper[4840]: W0129 12:21:40.581294 4840 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddceb14fc_bfee_481f_82dc_c0f60d39f650.slice/crio-977aadf09e49f27bac21aae90393fe8ee614ea1cd152ce4cdfb87cba9d3f8bf5 WatchSource:0}: Error finding container 977aadf09e49f27bac21aae90393fe8ee614ea1cd152ce4cdfb87cba9d3f8bf5: Status 404 returned error can't find the container with id 977aadf09e49f27bac21aae90393fe8ee614ea1cd152ce4cdfb87cba9d3f8bf5 Jan 29 12:21:40 crc kubenswrapper[4840]: I0129 12:21:40.590117 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.060978 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v" event={"ID":"a2708669-810e-4c1f-8843-eb738dfec7e9","Type":"ContainerStarted","Data":"61b69a36d343017ed08f6de7232ac08e4203109c001c9fb9e5bec9ae2d0eb1c6"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.061435 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.076846 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" event={"ID":"dceb14fc-bfee-481f-82dc-c0f60d39f650","Type":"ContainerStarted","Data":"977aadf09e49f27bac21aae90393fe8ee614ea1cd152ce4cdfb87cba9d3f8bf5"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.081916 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k" event={"ID":"ef7c6b60-290c-4ff4-b3df-b9bebd38cd07","Type":"ContainerStarted","Data":"661549edc8aa8636b71901d17f6fe40bb566fb9da903f24b2b62079d43703b4a"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.082145 4840 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.085559 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v" podStartSLOduration=8.22125522 podStartE2EDuration="29.085540038s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.47838936 +0000 UTC m=+1006.141369253" lastFinishedPulling="2026-01-29 12:21:35.342674178 +0000 UTC m=+1027.005654071" observedRunningTime="2026-01-29 12:21:41.078222421 +0000 UTC m=+1032.741202314" watchObservedRunningTime="2026-01-29 12:21:41.085540038 +0000 UTC m=+1032.748519931" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.086179 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n" event={"ID":"3928ea1f-3fa6-43b2-abe9-1f0554100f8c","Type":"ContainerStarted","Data":"bad895df69244d36e76550966a6c36014eecb8364e7e2edcef8dd96ca23b9e27"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.086227 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.092108 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" event={"ID":"201f6c04-44f6-4d28-bb64-b6f99c322f55","Type":"ContainerStarted","Data":"5dbf33e5a3b4282e57e68423f383880a4cbf4186487e741b11d5068cf4d16f13"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.104631 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" 
event={"ID":"d19fb200-1d61-4334-b6aa-7b45c8b79502","Type":"ContainerStarted","Data":"492786fb83a8c27d09e5204d85d832830199a12c6b1add28d24093365d73dee0"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.121938 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" event={"ID":"3ea9bf9d-883b-4e69-9ae6-6354a4004de3","Type":"ContainerStarted","Data":"f46ebf013097f5766dc7a05ff1c04b2ba0291075a91f53da509d1cc1d641c955"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.122668 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.130165 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n" podStartSLOduration=8.836981541 podStartE2EDuration="29.13014951s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.049433847 +0000 UTC m=+1006.712413740" lastFinishedPulling="2026-01-29 12:21:35.342601806 +0000 UTC m=+1027.005581709" observedRunningTime="2026-01-29 12:21:41.129281366 +0000 UTC m=+1032.792261259" watchObservedRunningTime="2026-01-29 12:21:41.13014951 +0000 UTC m=+1032.793129393" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.133326 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg" event={"ID":"e6ee0757-467e-4f45-a612-0c6db645276b","Type":"ContainerStarted","Data":"42f1c1c941552d8fef6f16037fb8ff86d57421cc1b7daaf9eebfa3cdfca0b7b6"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.133996 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.142564 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k" podStartSLOduration=9.552784724 podStartE2EDuration="29.142544064s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.988750442 +0000 UTC m=+1006.651730335" lastFinishedPulling="2026-01-29 12:21:34.578509782 +0000 UTC m=+1026.241489675" observedRunningTime="2026-01-29 12:21:41.105209188 +0000 UTC m=+1032.768189091" watchObservedRunningTime="2026-01-29 12:21:41.142544064 +0000 UTC m=+1032.805523957" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.150311 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" podStartSLOduration=9.427635104 podStartE2EDuration="29.150289772s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.855783802 +0000 UTC m=+1006.518763685" lastFinishedPulling="2026-01-29 12:21:34.57843846 +0000 UTC m=+1026.241418353" observedRunningTime="2026-01-29 12:21:41.149821989 +0000 UTC m=+1032.812801892" watchObservedRunningTime="2026-01-29 12:21:41.150289772 +0000 UTC m=+1032.813269665" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.154173 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq" event={"ID":"6a18af7c-6e6a-495b-88a7-7c44230a5ead","Type":"ContainerStarted","Data":"dfa2a51bcdf7f47c832db3e789d5c96f8c613dd0ea2785c8840c220d4c89e063"} Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.155107 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.189384 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg" podStartSLOduration=9.535742825 podStartE2EDuration="29.189362854s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.925267303 +0000 UTC m=+1006.588247196" lastFinishedPulling="2026-01-29 12:21:34.578887332 +0000 UTC m=+1026.241867225" observedRunningTime="2026-01-29 12:21:41.171975446 +0000 UTC m=+1032.834955349" watchObservedRunningTime="2026-01-29 12:21:41.189362854 +0000 UTC m=+1032.852342747" Jan 29 12:21:41 crc kubenswrapper[4840]: I0129 12:21:41.203007 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq" podStartSLOduration=9.48543996 podStartE2EDuration="29.202992221s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.860998972 +0000 UTC m=+1006.523978865" lastFinishedPulling="2026-01-29 12:21:34.578551233 +0000 UTC m=+1026.241531126" observedRunningTime="2026-01-29 12:21:41.199463566 +0000 UTC m=+1032.862443459" watchObservedRunningTime="2026-01-29 12:21:41.202992221 +0000 UTC m=+1032.865972114" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.176669 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" event={"ID":"201f6c04-44f6-4d28-bb64-b6f99c322f55","Type":"ContainerStarted","Data":"208bd58930c05032ca051298e8454dd92792016dce10dff8103ead4d3d7aca6a"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.177636 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.189437 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" 
event={"ID":"7484cebd-11a3-4740-9791-dd3949b8dcaa","Type":"ContainerStarted","Data":"73d33718ea30709d605f70764d5fb1c3b80516ed0dd92cd69df008d362dcee28"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.190241 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.206022 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" event={"ID":"7671ead5-a005-40a7-b132-adda6935e9b8","Type":"ContainerStarted","Data":"5897dd25adc35b4403b4b2433ecfb9a2eac7cb1101aef245215b60aa4886b12e"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.206553 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.226012 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt" event={"ID":"7f388a54-98a3-410d-a742-c6e4501b70e0","Type":"ContainerStarted","Data":"e2b8f5859c70b7b932b7ab5f2b6e7f171a0bc4c0c9f8d60bc9ef8df3b03e54b7"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.226664 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.229124 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" event={"ID":"06f49550-e51b-4284-b192-ff5829c2bfaf","Type":"ContainerStarted","Data":"baecd3f6eda20a4f09bb26a905d75cd4443b9f5123db602db86f342488e02885"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.229439 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.240174 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5" event={"ID":"8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0","Type":"ContainerStarted","Data":"2836c8fd51db5f73058d4518ae09f1972793426529c81ac9e3a84afe0c2a67ad"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.240767 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.251148 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" event={"ID":"bfc51868-5269-40cb-b17f-054309802b44","Type":"ContainerStarted","Data":"fa8f0165dfae630c567800ae8955ed04ee6dd52bcce2c0e5d726dcd1a1638fa0"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.251338 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" podStartSLOduration=3.739141777 podStartE2EDuration="29.25131981s" podCreationTimestamp="2026-01-29 12:21:13 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.080896534 +0000 UTC m=+1006.743876427" lastFinishedPulling="2026-01-29 12:21:40.593074567 +0000 UTC m=+1032.256054460" observedRunningTime="2026-01-29 12:21:42.249396408 +0000 UTC m=+1033.912376321" watchObservedRunningTime="2026-01-29 12:21:42.25131981 +0000 UTC m=+1033.914299703" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.251788 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" podStartSLOduration=29.251783772 podStartE2EDuration="29.251783772s" podCreationTimestamp="2026-01-29 12:21:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-29 12:21:42.219843562 +0000 UTC m=+1033.882823455" watchObservedRunningTime="2026-01-29 12:21:42.251783772 +0000 UTC m=+1033.914763665" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.257365 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" event={"ID":"bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5","Type":"ContainerStarted","Data":"9a7b29ffc6f108af78cbcd925ec496f20fe9c53d5893ab58648db1dfadb7da0b"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.258072 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.263543 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" event={"ID":"882d0d06-c72a-46d5-985b-578094eedc4c","Type":"ContainerStarted","Data":"3e0a279fbd442adcff18a20f0fe410ede3979cd37f808aa55c2888eef6244739"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.264122 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.267629 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" podStartSLOduration=4.723132234 podStartE2EDuration="30.267613458s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.057489143 +0000 UTC m=+1006.720469036" lastFinishedPulling="2026-01-29 12:21:40.601970367 +0000 UTC m=+1032.264950260" observedRunningTime="2026-01-29 12:21:42.264432603 +0000 UTC m=+1033.927412486" watchObservedRunningTime="2026-01-29 
12:21:42.267613458 +0000 UTC m=+1033.930593361" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.270552 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" event={"ID":"7dca7ac4-5015-46ef-aa82-15fea812cca2","Type":"ContainerStarted","Data":"8fdac1a59959f23624e0fbeb317f60447485038cdb1e0fce6c9cd9eb01a360ba"} Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.271041 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.289244 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5" podStartSLOduration=9.325049993 podStartE2EDuration="30.28921776s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.37848935 +0000 UTC m=+1006.041469243" lastFinishedPulling="2026-01-29 12:21:35.342657117 +0000 UTC m=+1027.005637010" observedRunningTime="2026-01-29 12:21:42.288342176 +0000 UTC m=+1033.951322079" watchObservedRunningTime="2026-01-29 12:21:42.28921776 +0000 UTC m=+1033.952197653" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.345913 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-lm2m4" podStartSLOduration=3.574667956 podStartE2EDuration="29.345886066s" podCreationTimestamp="2026-01-29 12:21:13 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.067229405 +0000 UTC m=+1006.730209298" lastFinishedPulling="2026-01-29 12:21:40.838447515 +0000 UTC m=+1032.501427408" observedRunningTime="2026-01-29 12:21:42.321329065 +0000 UTC m=+1033.984308958" watchObservedRunningTime="2026-01-29 12:21:42.345886066 +0000 UTC m=+1034.008865959" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.346740 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" podStartSLOduration=3.866873335 podStartE2EDuration="29.346733779s" podCreationTimestamp="2026-01-29 12:21:13 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.081373797 +0000 UTC m=+1006.744353690" lastFinishedPulling="2026-01-29 12:21:40.561234241 +0000 UTC m=+1032.224214134" observedRunningTime="2026-01-29 12:21:42.345480425 +0000 UTC m=+1034.008460348" watchObservedRunningTime="2026-01-29 12:21:42.346733779 +0000 UTC m=+1034.009713672" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.375691 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt" podStartSLOduration=9.501888853 podStartE2EDuration="30.375657797s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.469510271 +0000 UTC m=+1006.132490164" lastFinishedPulling="2026-01-29 12:21:35.343279215 +0000 UTC m=+1027.006259108" observedRunningTime="2026-01-29 12:21:42.363270574 +0000 UTC m=+1034.026250467" watchObservedRunningTime="2026-01-29 12:21:42.375657797 +0000 UTC m=+1034.038637690" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.395875 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" podStartSLOduration=3.86740058 podStartE2EDuration="29.395851152s" podCreationTimestamp="2026-01-29 12:21:13 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.061386668 +0000 UTC m=+1006.724366561" lastFinishedPulling="2026-01-29 12:21:40.58983724 +0000 UTC m=+1032.252817133" observedRunningTime="2026-01-29 12:21:42.393986612 +0000 UTC m=+1034.056966525" watchObservedRunningTime="2026-01-29 12:21:42.395851152 +0000 UTC m=+1034.058831045" Jan 29 12:21:42 crc kubenswrapper[4840]: I0129 12:21:42.440925 4840 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" podStartSLOduration=4.92794678 podStartE2EDuration="30.440869464s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.076933867 +0000 UTC m=+1006.739913760" lastFinishedPulling="2026-01-29 12:21:40.589856551 +0000 UTC m=+1032.252836444" observedRunningTime="2026-01-29 12:21:42.424662267 +0000 UTC m=+1034.087642190" watchObservedRunningTime="2026-01-29 12:21:42.440869464 +0000 UTC m=+1034.103849347" Jan 29 12:21:44 crc kubenswrapper[4840]: I0129 12:21:44.021679 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" podStartSLOduration=5.50701115 podStartE2EDuration="31.02165877s" podCreationTimestamp="2026-01-29 12:21:13 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.075307283 +0000 UTC m=+1006.738287176" lastFinishedPulling="2026-01-29 12:21:40.589954903 +0000 UTC m=+1032.252934796" observedRunningTime="2026-01-29 12:21:42.461440628 +0000 UTC m=+1034.124420521" watchObservedRunningTime="2026-01-29 12:21:44.02165877 +0000 UTC m=+1035.684638663" Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.308122 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62" event={"ID":"2dc9b4aa-090d-43e7-b7d0-060d29ae213b","Type":"ContainerStarted","Data":"8c5959bef451eae5ba7f4af7c51ba7c9f7abd4855ee8ce5a7b9cb1d80b950ff1"} Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.309377 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62" Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.311099 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" event={"ID":"dceb14fc-bfee-481f-82dc-c0f60d39f650","Type":"ContainerStarted","Data":"796ac25aba35af2fd3d62d36dd4543b4a7c47c79559779c5a4dd21fc5d758768"} Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.311708 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.313467 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" event={"ID":"d19fb200-1d61-4334-b6aa-7b45c8b79502","Type":"ContainerStarted","Data":"d4e583f699bd1cce71027bd3aa7014e05ce5c53ff0f0f5a2aea15dc1e3ef2f82"} Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.313611 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.331200 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62" podStartSLOduration=3.484372198 podStartE2EDuration="33.331177041s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.925604442 +0000 UTC m=+1006.588584335" lastFinishedPulling="2026-01-29 12:21:44.772409285 +0000 UTC m=+1036.435389178" observedRunningTime="2026-01-29 12:21:45.328789758 +0000 UTC m=+1036.991769651" watchObservedRunningTime="2026-01-29 12:21:45.331177041 +0000 UTC m=+1036.994156934" Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.352448 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" podStartSLOduration=29.171473482 podStartE2EDuration="33.352425663s" 
podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:40.589768859 +0000 UTC m=+1032.252748752" lastFinishedPulling="2026-01-29 12:21:44.77072104 +0000 UTC m=+1036.433700933" observedRunningTime="2026-01-29 12:21:45.352075954 +0000 UTC m=+1037.015055847" watchObservedRunningTime="2026-01-29 12:21:45.352425663 +0000 UTC m=+1037.015405556" Jan 29 12:21:45 crc kubenswrapper[4840]: I0129 12:21:45.371307 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" podStartSLOduration=29.187774802 podStartE2EDuration="33.371284912s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:40.590623392 +0000 UTC m=+1032.253603285" lastFinishedPulling="2026-01-29 12:21:44.774133502 +0000 UTC m=+1036.437113395" observedRunningTime="2026-01-29 12:21:45.367214922 +0000 UTC m=+1037.030194825" watchObservedRunningTime="2026-01-29 12:21:45.371284912 +0000 UTC m=+1037.034264805" Jan 29 12:21:46 crc kubenswrapper[4840]: I0129 12:21:46.334615 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" event={"ID":"8c034357-fd7c-4ff7-9b61-5a8491a6b34a","Type":"ContainerStarted","Data":"8a27ec89a62fcd44494079b0aed49a3e327bffa7f627fa81313d98c35cc6bb1f"} Jan 29 12:21:46 crc kubenswrapper[4840]: I0129 12:21:46.335089 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" Jan 29 12:21:46 crc kubenswrapper[4840]: I0129 12:21:46.355430 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" podStartSLOduration=2.452560072 podStartE2EDuration="33.355410332s" podCreationTimestamp="2026-01-29 12:21:13 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.011783833 +0000 UTC 
m=+1006.674763726" lastFinishedPulling="2026-01-29 12:21:45.914634093 +0000 UTC m=+1037.577613986" observedRunningTime="2026-01-29 12:21:46.348531126 +0000 UTC m=+1038.011511019" watchObservedRunningTime="2026-01-29 12:21:46.355410332 +0000 UTC m=+1038.018390225" Jan 29 12:21:48 crc kubenswrapper[4840]: I0129 12:21:48.347327 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" event={"ID":"9d94bfa3-811b-4590-b407-220d5f249477","Type":"ContainerStarted","Data":"52419302f023ed9a240b62443414264571c6956f008607fabdd60fcd885d5924"} Jan 29 12:21:48 crc kubenswrapper[4840]: I0129 12:21:48.347829 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" Jan 29 12:21:48 crc kubenswrapper[4840]: I0129 12:21:48.365197 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" podStartSLOduration=3.545206706 podStartE2EDuration="36.365173259s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.050495495 +0000 UTC m=+1006.713475388" lastFinishedPulling="2026-01-29 12:21:47.870462048 +0000 UTC m=+1039.533441941" observedRunningTime="2026-01-29 12:21:48.362087126 +0000 UTC m=+1040.025067039" watchObservedRunningTime="2026-01-29 12:21:48.365173259 +0000 UTC m=+1040.028153152" Jan 29 12:21:49 crc kubenswrapper[4840]: I0129 12:21:49.355258 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7" event={"ID":"66a68829-6bd5-4912-9ee1-532cfd70df6e","Type":"ContainerStarted","Data":"ad97d07cec03873695b08dcf6a5711bf6f43b9b56f9a796236dcbcac942b6bd6"} Jan 29 12:21:49 crc kubenswrapper[4840]: I0129 12:21:49.355815 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7" Jan 29 12:21:49 crc kubenswrapper[4840]: I0129 12:21:49.421342 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7" podStartSLOduration=3.7317054389999997 podStartE2EDuration="37.421321768s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:14.989094932 +0000 UTC m=+1006.652074825" lastFinishedPulling="2026-01-29 12:21:48.678711261 +0000 UTC m=+1040.341691154" observedRunningTime="2026-01-29 12:21:49.41769789 +0000 UTC m=+1041.080677813" watchObservedRunningTime="2026-01-29 12:21:49.421321768 +0000 UTC m=+1041.084301661" Jan 29 12:21:49 crc kubenswrapper[4840]: I0129 12:21:49.487877 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg" Jan 29 12:21:49 crc kubenswrapper[4840]: I0129 12:21:49.680472 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-65dc8f5954-5vmnj" Jan 29 12:21:50 crc kubenswrapper[4840]: I0129 12:21:50.361792 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" event={"ID":"bfedf099-a451-4d2f-b473-4c7756870b55","Type":"ContainerStarted","Data":"b8afd342717e675fa3b92862e9498b8be366a1045524889d9c0ccdba7f0b2550"} Jan 29 12:21:50 crc kubenswrapper[4840]: I0129 12:21:50.362076 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" Jan 29 12:21:50 crc kubenswrapper[4840]: I0129 12:21:50.378164 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" podStartSLOduration=3.921838818 
podStartE2EDuration="38.378144633s" podCreationTimestamp="2026-01-29 12:21:12 +0000 UTC" firstStartedPulling="2026-01-29 12:21:15.050461424 +0000 UTC m=+1006.713441317" lastFinishedPulling="2026-01-29 12:21:49.506767239 +0000 UTC m=+1041.169747132" observedRunningTime="2026-01-29 12:21:50.375135282 +0000 UTC m=+1042.038115185" watchObservedRunningTime="2026-01-29 12:21:50.378144633 +0000 UTC m=+1042.041124526" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.016665 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-657667746d-k2jnt" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.038797 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7595cf584-blgz5" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.089406 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-6db5dbd896-rxp7k" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.119777 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-55d5d5f8ff-xkk5v" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.170915 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5499bccc75-cbdm7" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.321928 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-q5m62" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.382456 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-56cb7c4b4c-m46ns" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.440112 4840 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6475bdcbc4-tf2sq" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.483326 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-k7s4n" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.508743 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-55df775b69-7lcnz" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.525424 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5ccd5b7f8f-c6hc5" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.553615 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6b855b4fc4-mrvgg" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.587803 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-2gldt" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.637658 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-znb4m" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.694651 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-c95fd9dc5-zjkxq" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.729923 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-ktps6" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.830160 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/swift-operator-controller-manager-6f7455757b-qgrxt" Jan 29 12:21:53 crc kubenswrapper[4840]: I0129 12:21:53.972730 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-56b5dc77fd-7lx5t" Jan 29 12:21:58 crc kubenswrapper[4840]: I0129 12:21:58.972676 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wcq6w" Jan 29 12:22:03 crc kubenswrapper[4840]: I0129 12:22:03.417101 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-77bb7ffb8c-qht99" Jan 29 12:22:53 crc kubenswrapper[4840]: I0129 12:22:53.521811 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:22:53 crc kubenswrapper[4840]: I0129 12:22:53.522456 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:23 crc kubenswrapper[4840]: I0129 12:23:23.521719 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:23:23 crc kubenswrapper[4840]: I0129 12:23:23.523052 4840 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:53 crc kubenswrapper[4840]: I0129 12:23:53.522536 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:23:53 crc kubenswrapper[4840]: I0129 12:23:53.524390 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:23:53 crc kubenswrapper[4840]: I0129 12:23:53.524493 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:23:53 crc kubenswrapper[4840]: I0129 12:23:53.525817 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a77c6788ba01603f527778b3118f3ab273c4bc5625cfa6fbdc57f992f241d646"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:23:53 crc kubenswrapper[4840]: I0129 12:23:53.525898 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" 
containerID="cri-o://a77c6788ba01603f527778b3118f3ab273c4bc5625cfa6fbdc57f992f241d646" gracePeriod=600 Jan 29 12:23:54 crc kubenswrapper[4840]: I0129 12:23:54.283197 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="a77c6788ba01603f527778b3118f3ab273c4bc5625cfa6fbdc57f992f241d646" exitCode=0 Jan 29 12:23:54 crc kubenswrapper[4840]: I0129 12:23:54.283258 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"a77c6788ba01603f527778b3118f3ab273c4bc5625cfa6fbdc57f992f241d646"} Jan 29 12:23:54 crc kubenswrapper[4840]: I0129 12:23:54.283679 4840 scope.go:117] "RemoveContainer" containerID="7baefb030f501ff291c20d387cd99f80bc3c294136bd6bcef30704b577ae25fb" Jan 29 12:23:55 crc kubenswrapper[4840]: I0129 12:23:55.294491 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"e7a7ebfab7cd45ce38d568bdcbff7fcfd159aadab4691a129d4f3024b4e7f9ba"} Jan 29 12:26:23 crc kubenswrapper[4840]: I0129 12:26:23.521626 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:26:23 crc kubenswrapper[4840]: I0129 12:26:23.522523 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:26:53 crc kubenswrapper[4840]: 
I0129 12:26:53.521844 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:26:53 crc kubenswrapper[4840]: I0129 12:26:53.522490 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:27:23 crc kubenswrapper[4840]: I0129 12:27:23.521607 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:27:23 crc kubenswrapper[4840]: I0129 12:27:23.523564 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:27:23 crc kubenswrapper[4840]: I0129 12:27:23.523887 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:27:23 crc kubenswrapper[4840]: I0129 12:27:23.524745 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e7a7ebfab7cd45ce38d568bdcbff7fcfd159aadab4691a129d4f3024b4e7f9ba"} 
pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:27:23 crc kubenswrapper[4840]: I0129 12:27:23.524910 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://e7a7ebfab7cd45ce38d568bdcbff7fcfd159aadab4691a129d4f3024b4e7f9ba" gracePeriod=600 Jan 29 12:27:24 crc kubenswrapper[4840]: I0129 12:27:24.082811 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="e7a7ebfab7cd45ce38d568bdcbff7fcfd159aadab4691a129d4f3024b4e7f9ba" exitCode=0 Jan 29 12:27:24 crc kubenswrapper[4840]: I0129 12:27:24.082858 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"e7a7ebfab7cd45ce38d568bdcbff7fcfd159aadab4691a129d4f3024b4e7f9ba"} Jan 29 12:27:24 crc kubenswrapper[4840]: I0129 12:27:24.082892 4840 scope.go:117] "RemoveContainer" containerID="a77c6788ba01603f527778b3118f3ab273c4bc5625cfa6fbdc57f992f241d646" Jan 29 12:27:25 crc kubenswrapper[4840]: I0129 12:27:25.090824 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"} Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.129191 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mkcxf"] Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.132216 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.148899 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkcxf"] Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.297733 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-catalog-content\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.297779 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwtd\" (UniqueName: \"kubernetes.io/projected/523332f5-6d59-40b4-9ed1-2d511e7e674b-kube-api-access-7rwtd\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.297821 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-utilities\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.398618 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-catalog-content\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.398673 4840 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-7rwtd\" (UniqueName: \"kubernetes.io/projected/523332f5-6d59-40b4-9ed1-2d511e7e674b-kube-api-access-7rwtd\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.398718 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-utilities\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.399212 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-catalog-content\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.399254 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-utilities\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.424408 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rwtd\" (UniqueName: \"kubernetes.io/projected/523332f5-6d59-40b4-9ed1-2d511e7e674b-kube-api-access-7rwtd\") pod \"redhat-operators-mkcxf\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.475600 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:22 crc kubenswrapper[4840]: I0129 12:28:22.747431 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mkcxf"] Jan 29 12:28:23 crc kubenswrapper[4840]: I0129 12:28:23.591009 4840 generic.go:334] "Generic (PLEG): container finished" podID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerID="e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523" exitCode=0 Jan 29 12:28:23 crc kubenswrapper[4840]: I0129 12:28:23.591374 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkcxf" event={"ID":"523332f5-6d59-40b4-9ed1-2d511e7e674b","Type":"ContainerDied","Data":"e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523"} Jan 29 12:28:23 crc kubenswrapper[4840]: I0129 12:28:23.591411 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkcxf" event={"ID":"523332f5-6d59-40b4-9ed1-2d511e7e674b","Type":"ContainerStarted","Data":"3cb6bde1f22a113e1cf72d66768ec99029d2f6a25427cebcd2419bce1d078b70"} Jan 29 12:28:23 crc kubenswrapper[4840]: I0129 12:28:23.593319 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:28:25 crc kubenswrapper[4840]: I0129 12:28:25.610819 4840 generic.go:334] "Generic (PLEG): container finished" podID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerID="498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b" exitCode=0 Jan 29 12:28:25 crc kubenswrapper[4840]: I0129 12:28:25.611276 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkcxf" event={"ID":"523332f5-6d59-40b4-9ed1-2d511e7e674b","Type":"ContainerDied","Data":"498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b"} Jan 29 12:28:26 crc kubenswrapper[4840]: I0129 12:28:26.621887 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-mkcxf" event={"ID":"523332f5-6d59-40b4-9ed1-2d511e7e674b","Type":"ContainerStarted","Data":"905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539"} Jan 29 12:28:26 crc kubenswrapper[4840]: I0129 12:28:26.640489 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mkcxf" podStartSLOduration=2.187807535 podStartE2EDuration="4.640469207s" podCreationTimestamp="2026-01-29 12:28:22 +0000 UTC" firstStartedPulling="2026-01-29 12:28:23.592999662 +0000 UTC m=+1435.255979555" lastFinishedPulling="2026-01-29 12:28:26.045661334 +0000 UTC m=+1437.708641227" observedRunningTime="2026-01-29 12:28:26.638308439 +0000 UTC m=+1438.301288352" watchObservedRunningTime="2026-01-29 12:28:26.640469207 +0000 UTC m=+1438.303449100" Jan 29 12:28:32 crc kubenswrapper[4840]: I0129 12:28:32.476335 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:32 crc kubenswrapper[4840]: I0129 12:28:32.476943 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:32 crc kubenswrapper[4840]: I0129 12:28:32.550817 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:32 crc kubenswrapper[4840]: I0129 12:28:32.764328 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:32 crc kubenswrapper[4840]: I0129 12:28:32.819508 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkcxf"] Jan 29 12:28:34 crc kubenswrapper[4840]: I0129 12:28:34.685546 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mkcxf" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" 
containerName="registry-server" containerID="cri-o://905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539" gracePeriod=2 Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.591668 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.697745 4840 generic.go:334] "Generic (PLEG): container finished" podID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerID="905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539" exitCode=0 Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.697805 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkcxf" event={"ID":"523332f5-6d59-40b4-9ed1-2d511e7e674b","Type":"ContainerDied","Data":"905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539"} Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.698202 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-catalog-content\") pod \"523332f5-6d59-40b4-9ed1-2d511e7e674b\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.698247 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-utilities\") pod \"523332f5-6d59-40b4-9ed1-2d511e7e674b\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.697853 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mkcxf" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.698380 4840 scope.go:117] "RemoveContainer" containerID="905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.698651 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwtd\" (UniqueName: \"kubernetes.io/projected/523332f5-6d59-40b4-9ed1-2d511e7e674b-kube-api-access-7rwtd\") pod \"523332f5-6d59-40b4-9ed1-2d511e7e674b\" (UID: \"523332f5-6d59-40b4-9ed1-2d511e7e674b\") " Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.699485 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-utilities" (OuterVolumeSpecName: "utilities") pod "523332f5-6d59-40b4-9ed1-2d511e7e674b" (UID: "523332f5-6d59-40b4-9ed1-2d511e7e674b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.699653 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mkcxf" event={"ID":"523332f5-6d59-40b4-9ed1-2d511e7e674b","Type":"ContainerDied","Data":"3cb6bde1f22a113e1cf72d66768ec99029d2f6a25427cebcd2419bce1d078b70"} Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.705229 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523332f5-6d59-40b4-9ed1-2d511e7e674b-kube-api-access-7rwtd" (OuterVolumeSpecName: "kube-api-access-7rwtd") pod "523332f5-6d59-40b4-9ed1-2d511e7e674b" (UID: "523332f5-6d59-40b4-9ed1-2d511e7e674b"). InnerVolumeSpecName "kube-api-access-7rwtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.741763 4840 scope.go:117] "RemoveContainer" containerID="498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.763147 4840 scope.go:117] "RemoveContainer" containerID="e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.791706 4840 scope.go:117] "RemoveContainer" containerID="905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539" Jan 29 12:28:35 crc kubenswrapper[4840]: E0129 12:28:35.792376 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539\": container with ID starting with 905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539 not found: ID does not exist" containerID="905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.792434 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539"} err="failed to get container status \"905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539\": rpc error: code = NotFound desc = could not find container \"905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539\": container with ID starting with 905d4534ac177fca1804f4728ecc6e5980fedae9e889444f75363f45e24d9539 not found: ID does not exist" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.792464 4840 scope.go:117] "RemoveContainer" containerID="498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b" Jan 29 12:28:35 crc kubenswrapper[4840]: E0129 12:28:35.793003 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could 
not find container \"498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b\": container with ID starting with 498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b not found: ID does not exist" containerID="498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.793076 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b"} err="failed to get container status \"498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b\": rpc error: code = NotFound desc = could not find container \"498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b\": container with ID starting with 498b00792db81b48ff9ed597eb46877e708658814e8e1e149a0593b1933c659b not found: ID does not exist" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.793117 4840 scope.go:117] "RemoveContainer" containerID="e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523" Jan 29 12:28:35 crc kubenswrapper[4840]: E0129 12:28:35.793488 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523\": container with ID starting with e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523 not found: ID does not exist" containerID="e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.793523 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523"} err="failed to get container status \"e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523\": rpc error: code = NotFound desc = could not find container \"e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523\": 
container with ID starting with e79c26bfcdc9fff5bdf151cff0992c1fdf2b6eb702ee7eed04b67f8e18054523 not found: ID does not exist" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.800883 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.800925 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwtd\" (UniqueName: \"kubernetes.io/projected/523332f5-6d59-40b4-9ed1-2d511e7e674b-kube-api-access-7rwtd\") on node \"crc\" DevicePath \"\"" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.869348 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "523332f5-6d59-40b4-9ed1-2d511e7e674b" (UID: "523332f5-6d59-40b4-9ed1-2d511e7e674b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:28:35 crc kubenswrapper[4840]: I0129 12:28:35.902504 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/523332f5-6d59-40b4-9ed1-2d511e7e674b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:28:36 crc kubenswrapper[4840]: I0129 12:28:36.035556 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mkcxf"] Jan 29 12:28:36 crc kubenswrapper[4840]: I0129 12:28:36.043440 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mkcxf"] Jan 29 12:28:37 crc kubenswrapper[4840]: I0129 12:28:37.010348 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" path="/var/lib/kubelet/pods/523332f5-6d59-40b4-9ed1-2d511e7e674b/volumes" Jan 29 12:29:53 crc kubenswrapper[4840]: I0129 12:29:53.521741 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:29:53 crc kubenswrapper[4840]: I0129 12:29:53.522471 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.165568 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t"] Jan 29 12:30:00 crc kubenswrapper[4840]: E0129 12:30:00.166676 4840 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerName="extract-content" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.166691 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerName="extract-content" Jan 29 12:30:00 crc kubenswrapper[4840]: E0129 12:30:00.166707 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerName="extract-utilities" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.166715 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerName="extract-utilities" Jan 29 12:30:00 crc kubenswrapper[4840]: E0129 12:30:00.166737 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerName="registry-server" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.166744 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerName="registry-server" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.166893 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="523332f5-6d59-40b4-9ed1-2d511e7e674b" containerName="registry-server" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.167470 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.169882 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.170766 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t"] Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.171292 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.239500 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53373040-fe74-4776-8d62-42c8e31d02e0-config-volume\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.239558 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp9vv\" (UniqueName: \"kubernetes.io/projected/53373040-fe74-4776-8d62-42c8e31d02e0-kube-api-access-vp9vv\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.239843 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53373040-fe74-4776-8d62-42c8e31d02e0-secret-volume\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.340707 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53373040-fe74-4776-8d62-42c8e31d02e0-secret-volume\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.340783 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53373040-fe74-4776-8d62-42c8e31d02e0-config-volume\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.340810 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp9vv\" (UniqueName: \"kubernetes.io/projected/53373040-fe74-4776-8d62-42c8e31d02e0-kube-api-access-vp9vv\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.342312 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53373040-fe74-4776-8d62-42c8e31d02e0-config-volume\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.348629 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/53373040-fe74-4776-8d62-42c8e31d02e0-secret-volume\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.358799 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp9vv\" (UniqueName: \"kubernetes.io/projected/53373040-fe74-4776-8d62-42c8e31d02e0-kube-api-access-vp9vv\") pod \"collect-profiles-29494830-sz49t\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.493493 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:00 crc kubenswrapper[4840]: I0129 12:30:00.935256 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t"] Jan 29 12:30:01 crc kubenswrapper[4840]: E0129 12:30:01.351113 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53373040_fe74_4776_8d62_42c8e31d02e0.slice/crio-e76cdf77624c90327a706b303bd35f84d7718a11dcac5e36144ebf6a8fd1352f.scope\": RecentStats: unable to find data in memory cache]" Jan 29 12:30:01 crc kubenswrapper[4840]: I0129 12:30:01.550629 4840 generic.go:334] "Generic (PLEG): container finished" podID="53373040-fe74-4776-8d62-42c8e31d02e0" containerID="e76cdf77624c90327a706b303bd35f84d7718a11dcac5e36144ebf6a8fd1352f" exitCode=0 Jan 29 12:30:01 crc kubenswrapper[4840]: I0129 12:30:01.550689 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" 
event={"ID":"53373040-fe74-4776-8d62-42c8e31d02e0","Type":"ContainerDied","Data":"e76cdf77624c90327a706b303bd35f84d7718a11dcac5e36144ebf6a8fd1352f"} Jan 29 12:30:01 crc kubenswrapper[4840]: I0129 12:30:01.550726 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" event={"ID":"53373040-fe74-4776-8d62-42c8e31d02e0","Type":"ContainerStarted","Data":"cc3946812d4992c29c065d1f790fd6a8b694d363321fa94fa1651cdcb6a02d5e"} Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.845869 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.880382 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53373040-fe74-4776-8d62-42c8e31d02e0-config-volume\") pod \"53373040-fe74-4776-8d62-42c8e31d02e0\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.880527 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp9vv\" (UniqueName: \"kubernetes.io/projected/53373040-fe74-4776-8d62-42c8e31d02e0-kube-api-access-vp9vv\") pod \"53373040-fe74-4776-8d62-42c8e31d02e0\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.880641 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53373040-fe74-4776-8d62-42c8e31d02e0-secret-volume\") pod \"53373040-fe74-4776-8d62-42c8e31d02e0\" (UID: \"53373040-fe74-4776-8d62-42c8e31d02e0\") " Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.881281 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53373040-fe74-4776-8d62-42c8e31d02e0-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "53373040-fe74-4776-8d62-42c8e31d02e0" (UID: "53373040-fe74-4776-8d62-42c8e31d02e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.887905 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53373040-fe74-4776-8d62-42c8e31d02e0-kube-api-access-vp9vv" (OuterVolumeSpecName: "kube-api-access-vp9vv") pod "53373040-fe74-4776-8d62-42c8e31d02e0" (UID: "53373040-fe74-4776-8d62-42c8e31d02e0"). InnerVolumeSpecName "kube-api-access-vp9vv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.893350 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53373040-fe74-4776-8d62-42c8e31d02e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "53373040-fe74-4776-8d62-42c8e31d02e0" (UID: "53373040-fe74-4776-8d62-42c8e31d02e0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.982043 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/53373040-fe74-4776-8d62-42c8e31d02e0-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.982599 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53373040-fe74-4776-8d62-42c8e31d02e0-config-volume\") on node \"crc\" DevicePath \"\""
Jan 29 12:30:02 crc kubenswrapper[4840]: I0129 12:30:02.982628 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp9vv\" (UniqueName: \"kubernetes.io/projected/53373040-fe74-4776-8d62-42c8e31d02e0-kube-api-access-vp9vv\") on node \"crc\" DevicePath \"\""
Jan 29 12:30:03 crc kubenswrapper[4840]: I0129 12:30:03.569597 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t" event={"ID":"53373040-fe74-4776-8d62-42c8e31d02e0","Type":"ContainerDied","Data":"cc3946812d4992c29c065d1f790fd6a8b694d363321fa94fa1651cdcb6a02d5e"}
Jan 29 12:30:03 crc kubenswrapper[4840]: I0129 12:30:03.569882 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc3946812d4992c29c065d1f790fd6a8b694d363321fa94fa1651cdcb6a02d5e"
Jan 29 12:30:03 crc kubenswrapper[4840]: I0129 12:30:03.569685 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494830-sz49t"
Jan 29 12:30:23 crc kubenswrapper[4840]: I0129 12:30:23.521879 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:30:23 crc kubenswrapper[4840]: I0129 12:30:23.524355 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.521915 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.522519 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.522562 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d"
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.523250 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.523313 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4" gracePeriod=600
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.962889 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4" exitCode=0
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.962965 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"}
Jan 29 12:30:53 crc kubenswrapper[4840]: I0129 12:30:53.963306 4840 scope.go:117] "RemoveContainer" containerID="e7a7ebfab7cd45ce38d568bdcbff7fcfd159aadab4691a129d4f3024b4e7f9ba"
Jan 29 12:30:54 crc kubenswrapper[4840]: E0129 12:30:54.177500 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:30:54 crc kubenswrapper[4840]: I0129 12:30:54.972576 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:30:54 crc kubenswrapper[4840]: E0129 12:30:54.972793 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:31:06 crc kubenswrapper[4840]: I0129 12:31:06.001853 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:31:06 crc kubenswrapper[4840]: E0129 12:31:06.003019 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.876063 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tzjnh"]
Jan 29 12:31:14 crc kubenswrapper[4840]: E0129 12:31:14.877143 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53373040-fe74-4776-8d62-42c8e31d02e0" containerName="collect-profiles"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.877158 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="53373040-fe74-4776-8d62-42c8e31d02e0" containerName="collect-profiles"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.877286 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="53373040-fe74-4776-8d62-42c8e31d02e0" containerName="collect-profiles"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.878406 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.892369 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzjnh"]
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.897579 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-catalog-content\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.897647 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8h7\" (UniqueName: \"kubernetes.io/projected/8bdb2298-990f-4adb-a3d2-39f61163af57-kube-api-access-5q8h7\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.897685 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-utilities\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.998971 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-catalog-content\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.999110 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8h7\" (UniqueName: \"kubernetes.io/projected/8bdb2298-990f-4adb-a3d2-39f61163af57-kube-api-access-5q8h7\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.999159 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-utilities\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.999712 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-catalog-content\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:14 crc kubenswrapper[4840]: I0129 12:31:14.999724 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-utilities\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:15 crc kubenswrapper[4840]: I0129 12:31:15.024076 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8h7\" (UniqueName: \"kubernetes.io/projected/8bdb2298-990f-4adb-a3d2-39f61163af57-kube-api-access-5q8h7\") pod \"certified-operators-tzjnh\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") " pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:15 crc kubenswrapper[4840]: I0129 12:31:15.199880 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:15 crc kubenswrapper[4840]: I0129 12:31:15.713510 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tzjnh"]
Jan 29 12:31:16 crc kubenswrapper[4840]: I0129 12:31:16.150657 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerID="9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2" exitCode=0
Jan 29 12:31:16 crc kubenswrapper[4840]: I0129 12:31:16.150725 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnh" event={"ID":"8bdb2298-990f-4adb-a3d2-39f61163af57","Type":"ContainerDied","Data":"9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2"}
Jan 29 12:31:16 crc kubenswrapper[4840]: I0129 12:31:16.151192 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnh" event={"ID":"8bdb2298-990f-4adb-a3d2-39f61163af57","Type":"ContainerStarted","Data":"de405400caf6caf8aa1565e53a7802eef2a05e88ca7637953a5d6f43a23980ad"}
Jan 29 12:31:17 crc kubenswrapper[4840]: I0129 12:31:17.002163 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:31:17 crc kubenswrapper[4840]: E0129 12:31:17.002766 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:31:18 crc kubenswrapper[4840]: I0129 12:31:18.169262 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerID="a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca" exitCode=0
Jan 29 12:31:18 crc kubenswrapper[4840]: I0129 12:31:18.169485 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnh" event={"ID":"8bdb2298-990f-4adb-a3d2-39f61163af57","Type":"ContainerDied","Data":"a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca"}
Jan 29 12:31:19 crc kubenswrapper[4840]: I0129 12:31:19.179085 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnh" event={"ID":"8bdb2298-990f-4adb-a3d2-39f61163af57","Type":"ContainerStarted","Data":"c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec"}
Jan 29 12:31:25 crc kubenswrapper[4840]: I0129 12:31:25.201362 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:25 crc kubenswrapper[4840]: I0129 12:31:25.202198 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:25 crc kubenswrapper[4840]: I0129 12:31:25.241604 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:25 crc kubenswrapper[4840]: I0129 12:31:25.268010 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tzjnh" podStartSLOduration=8.805138122 podStartE2EDuration="11.267991762s" podCreationTimestamp="2026-01-29 12:31:14 +0000 UTC" firstStartedPulling="2026-01-29 12:31:16.153566642 +0000 UTC m=+1607.816546535" lastFinishedPulling="2026-01-29 12:31:18.616420282 +0000 UTC m=+1610.279400175" observedRunningTime="2026-01-29 12:31:19.199235746 +0000 UTC m=+1610.862215649" watchObservedRunningTime="2026-01-29 12:31:25.267991762 +0000 UTC m=+1616.930971655"
Jan 29 12:31:25 crc kubenswrapper[4840]: I0129 12:31:25.295743 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:25 crc kubenswrapper[4840]: I0129 12:31:25.482022 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzjnh"]
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.236279 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tzjnh" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="registry-server" containerID="cri-o://c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec" gracePeriod=2
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.744176 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.888783 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5q8h7\" (UniqueName: \"kubernetes.io/projected/8bdb2298-990f-4adb-a3d2-39f61163af57-kube-api-access-5q8h7\") pod \"8bdb2298-990f-4adb-a3d2-39f61163af57\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") "
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.888871 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-utilities\") pod \"8bdb2298-990f-4adb-a3d2-39f61163af57\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") "
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.889044 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-catalog-content\") pod \"8bdb2298-990f-4adb-a3d2-39f61163af57\" (UID: \"8bdb2298-990f-4adb-a3d2-39f61163af57\") "
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.889970 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-utilities" (OuterVolumeSpecName: "utilities") pod "8bdb2298-990f-4adb-a3d2-39f61163af57" (UID: "8bdb2298-990f-4adb-a3d2-39f61163af57"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.896261 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bdb2298-990f-4adb-a3d2-39f61163af57-kube-api-access-5q8h7" (OuterVolumeSpecName: "kube-api-access-5q8h7") pod "8bdb2298-990f-4adb-a3d2-39f61163af57" (UID: "8bdb2298-990f-4adb-a3d2-39f61163af57"). InnerVolumeSpecName "kube-api-access-5q8h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.944330 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8bdb2298-990f-4adb-a3d2-39f61163af57" (UID: "8bdb2298-990f-4adb-a3d2-39f61163af57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.990997 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.991035 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5q8h7\" (UniqueName: \"kubernetes.io/projected/8bdb2298-990f-4adb-a3d2-39f61163af57-kube-api-access-5q8h7\") on node \"crc\" DevicePath \"\""
Jan 29 12:31:27 crc kubenswrapper[4840]: I0129 12:31:27.991048 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bdb2298-990f-4adb-a3d2-39f61163af57-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.247583 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerID="c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec" exitCode=0
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.247622 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnh" event={"ID":"8bdb2298-990f-4adb-a3d2-39f61163af57","Type":"ContainerDied","Data":"c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec"}
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.247663 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tzjnh"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.247686 4840 scope.go:117] "RemoveContainer" containerID="c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.247669 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tzjnh" event={"ID":"8bdb2298-990f-4adb-a3d2-39f61163af57","Type":"ContainerDied","Data":"de405400caf6caf8aa1565e53a7802eef2a05e88ca7637953a5d6f43a23980ad"}
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.272191 4840 scope.go:117] "RemoveContainer" containerID="a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.283517 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tzjnh"]
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.292346 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tzjnh"]
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.301433 4840 scope.go:117] "RemoveContainer" containerID="9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.325371 4840 scope.go:117] "RemoveContainer" containerID="c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec"
Jan 29 12:31:28 crc kubenswrapper[4840]: E0129 12:31:28.326175 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec\": container with ID starting with c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec not found: ID does not exist" containerID="c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.326251 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec"} err="failed to get container status \"c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec\": rpc error: code = NotFound desc = could not find container \"c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec\": container with ID starting with c1ab5a63a592712ccc8bd8c846d937958c05da25b278a0df56766e291dba93ec not found: ID does not exist"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.326331 4840 scope.go:117] "RemoveContainer" containerID="a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca"
Jan 29 12:31:28 crc kubenswrapper[4840]: E0129 12:31:28.326730 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca\": container with ID starting with a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca not found: ID does not exist" containerID="a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.326771 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca"} err="failed to get container status \"a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca\": rpc error: code = NotFound desc = could not find container \"a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca\": container with ID starting with a60b2f3d09ffcbe2bbe65023f6e831491ed282922707510e308545583aca43ca not found: ID does not exist"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.326815 4840 scope.go:117] "RemoveContainer" containerID="9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2"
Jan 29 12:31:28 crc kubenswrapper[4840]: E0129 12:31:28.327616 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2\": container with ID starting with 9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2 not found: ID does not exist" containerID="9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2"
Jan 29 12:31:28 crc kubenswrapper[4840]: I0129 12:31:28.327688 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2"} err="failed to get container status \"9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2\": rpc error: code = NotFound desc = could not find container \"9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2\": container with ID starting with 9a49aea5736a9f4c662f3239d2d753c9dea4da3a1f0dd2b6a0cf9130ba2935d2 not found: ID does not exist"
Jan 29 12:31:29 crc kubenswrapper[4840]: I0129 12:31:29.006595 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:31:29 crc kubenswrapper[4840]: E0129 12:31:29.007046 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:31:29 crc kubenswrapper[4840]: I0129 12:31:29.014034 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" path="/var/lib/kubelet/pods/8bdb2298-990f-4adb-a3d2-39f61163af57/volumes"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.679406 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2lr8n"]
Jan 29 12:31:32 crc kubenswrapper[4840]: E0129 12:31:32.680557 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="extract-content"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.680582 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="extract-content"
Jan 29 12:31:32 crc kubenswrapper[4840]: E0129 12:31:32.680601 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="extract-utilities"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.680613 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="extract-utilities"
Jan 29 12:31:32 crc kubenswrapper[4840]: E0129 12:31:32.680635 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="registry-server"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.680646 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="registry-server"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.680917 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bdb2298-990f-4adb-a3d2-39f61163af57" containerName="registry-server"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.682698 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.695551 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lr8n"]
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.865708 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jttzl\" (UniqueName: \"kubernetes.io/projected/e2aec6c4-ddfc-4557-8668-b92152a8292e-kube-api-access-jttzl\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.865776 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-catalog-content\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.865809 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-utilities\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.967441 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jttzl\" (UniqueName: \"kubernetes.io/projected/e2aec6c4-ddfc-4557-8668-b92152a8292e-kube-api-access-jttzl\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.967516 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-catalog-content\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.967545 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-utilities\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.968092 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-utilities\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.968462 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-catalog-content\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:32 crc kubenswrapper[4840]: I0129 12:31:32.991937 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jttzl\" (UniqueName: \"kubernetes.io/projected/e2aec6c4-ddfc-4557-8668-b92152a8292e-kube-api-access-jttzl\") pod \"community-operators-2lr8n\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:33 crc kubenswrapper[4840]: I0129 12:31:33.009153 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:33 crc kubenswrapper[4840]: I0129 12:31:33.530069 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2lr8n"]
Jan 29 12:31:34 crc kubenswrapper[4840]: I0129 12:31:34.300458 4840 generic.go:334] "Generic (PLEG): container finished" podID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerID="a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096" exitCode=0
Jan 29 12:31:34 crc kubenswrapper[4840]: I0129 12:31:34.300572 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lr8n" event={"ID":"e2aec6c4-ddfc-4557-8668-b92152a8292e","Type":"ContainerDied","Data":"a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096"}
Jan 29 12:31:34 crc kubenswrapper[4840]: I0129 12:31:34.300978 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lr8n" event={"ID":"e2aec6c4-ddfc-4557-8668-b92152a8292e","Type":"ContainerStarted","Data":"fd09575f7f171650a65f1ca39284846a660810e3081d1ab4e3974d1e15ad93dc"}
Jan 29 12:31:36 crc kubenswrapper[4840]: I0129 12:31:36.320045 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lr8n" event={"ID":"e2aec6c4-ddfc-4557-8668-b92152a8292e","Type":"ContainerStarted","Data":"6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e"}
Jan 29 12:31:37 crc kubenswrapper[4840]: I0129 12:31:37.333881 4840 generic.go:334] "Generic (PLEG): container finished" podID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerID="6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e" exitCode=0
Jan 29 12:31:37 crc kubenswrapper[4840]: I0129 12:31:37.334152 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lr8n" event={"ID":"e2aec6c4-ddfc-4557-8668-b92152a8292e","Type":"ContainerDied","Data":"6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e"}
Jan 29 12:31:40 crc kubenswrapper[4840]: I0129 12:31:40.001917 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:31:40 crc kubenswrapper[4840]: E0129 12:31:40.002571 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:31:41 crc kubenswrapper[4840]: I0129 12:31:41.369496 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lr8n" event={"ID":"e2aec6c4-ddfc-4557-8668-b92152a8292e","Type":"ContainerStarted","Data":"fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd"}
Jan 29 12:31:41 crc kubenswrapper[4840]: I0129 12:31:41.391706 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2lr8n" podStartSLOduration=2.905574115 podStartE2EDuration="9.391680178s" podCreationTimestamp="2026-01-29 12:31:32 +0000 UTC" firstStartedPulling="2026-01-29 12:31:34.303024157 +0000 UTC m=+1625.966004090" lastFinishedPulling="2026-01-29 12:31:40.78913024 +0000 UTC m=+1632.452110153" observedRunningTime="2026-01-29 12:31:41.388216244 +0000 UTC m=+1633.051196147" watchObservedRunningTime="2026-01-29 12:31:41.391680178 +0000 UTC m=+1633.054660071"
Jan 29 12:31:43 crc kubenswrapper[4840]: I0129 12:31:43.009892 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:43 crc kubenswrapper[4840]: I0129 12:31:43.011061 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2lr8n"
Jan 29 12:31:44 crc kubenswrapper[4840]: I0129 12:31:44.049786 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-2lr8n" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="registry-server" probeResult="failure" output=<
Jan 29 12:31:44 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s
Jan 29 12:31:44 crc kubenswrapper[4840]: >
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.518107 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p2p76"]
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.519981 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.535479 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2p76"]
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.693826 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwt8v\" (UniqueName: \"kubernetes.io/projected/140086e5-966c-4b77-8c12-76dfbf63a2c0-kube-api-access-nwt8v\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.694014 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-utilities\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.694568 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-catalog-content\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.796183 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-utilities\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.796258 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-catalog-content\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.796352 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwt8v\" (UniqueName: \"kubernetes.io/projected/140086e5-966c-4b77-8c12-76dfbf63a2c0-kube-api-access-nwt8v\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.796853 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-utilities\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76"
Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.796913 4840 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-catalog-content\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.818136 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwt8v\" (UniqueName: \"kubernetes.io/projected/140086e5-966c-4b77-8c12-76dfbf63a2c0-kube-api-access-nwt8v\") pod \"redhat-marketplace-p2p76\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:31:47 crc kubenswrapper[4840]: I0129 12:31:47.838743 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:31:48 crc kubenswrapper[4840]: I0129 12:31:48.284124 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2p76"] Jan 29 12:31:48 crc kubenswrapper[4840]: I0129 12:31:48.424914 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2p76" event={"ID":"140086e5-966c-4b77-8c12-76dfbf63a2c0","Type":"ContainerStarted","Data":"24b27ec2fa224c5d715db7644413b603c47f441ced34eb2f6cec7e6a333e7f0a"} Jan 29 12:31:49 crc kubenswrapper[4840]: I0129 12:31:49.435761 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2p76" event={"ID":"140086e5-966c-4b77-8c12-76dfbf63a2c0","Type":"ContainerStarted","Data":"e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da"} Jan 29 12:31:50 crc kubenswrapper[4840]: I0129 12:31:50.443410 4840 generic.go:334] "Generic (PLEG): container finished" podID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerID="e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da" exitCode=0 Jan 29 12:31:50 
crc kubenswrapper[4840]: I0129 12:31:50.443454 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2p76" event={"ID":"140086e5-966c-4b77-8c12-76dfbf63a2c0","Type":"ContainerDied","Data":"e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da"} Jan 29 12:31:52 crc kubenswrapper[4840]: I0129 12:31:52.462016 4840 generic.go:334] "Generic (PLEG): container finished" podID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerID="eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd" exitCode=0 Jan 29 12:31:52 crc kubenswrapper[4840]: I0129 12:31:52.462063 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2p76" event={"ID":"140086e5-966c-4b77-8c12-76dfbf63a2c0","Type":"ContainerDied","Data":"eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd"} Jan 29 12:31:53 crc kubenswrapper[4840]: I0129 12:31:53.001387 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4" Jan 29 12:31:53 crc kubenswrapper[4840]: E0129 12:31:53.001688 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:31:53 crc kubenswrapper[4840]: I0129 12:31:53.060732 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2lr8n" Jan 29 12:31:53 crc kubenswrapper[4840]: I0129 12:31:53.103339 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2lr8n" Jan 29 12:31:53 crc kubenswrapper[4840]: I0129 12:31:53.473389 
4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2p76" event={"ID":"140086e5-966c-4b77-8c12-76dfbf63a2c0","Type":"ContainerStarted","Data":"2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c"} Jan 29 12:31:53 crc kubenswrapper[4840]: I0129 12:31:53.495409 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p2p76" podStartSLOduration=3.771263657 podStartE2EDuration="6.495390782s" podCreationTimestamp="2026-01-29 12:31:47 +0000 UTC" firstStartedPulling="2026-01-29 12:31:50.444647801 +0000 UTC m=+1642.107627684" lastFinishedPulling="2026-01-29 12:31:53.168774916 +0000 UTC m=+1644.831754809" observedRunningTime="2026-01-29 12:31:53.489090741 +0000 UTC m=+1645.152070644" watchObservedRunningTime="2026-01-29 12:31:53.495390782 +0000 UTC m=+1645.158370675" Jan 29 12:31:54 crc kubenswrapper[4840]: I0129 12:31:54.091586 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lr8n"] Jan 29 12:31:54 crc kubenswrapper[4840]: I0129 12:31:54.480217 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2lr8n" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="registry-server" containerID="cri-o://fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd" gracePeriod=2 Jan 29 12:31:54 crc kubenswrapper[4840]: I0129 12:31:54.930383 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2lr8n" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.098719 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-utilities\") pod \"e2aec6c4-ddfc-4557-8668-b92152a8292e\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.098776 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-catalog-content\") pod \"e2aec6c4-ddfc-4557-8668-b92152a8292e\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.098847 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jttzl\" (UniqueName: \"kubernetes.io/projected/e2aec6c4-ddfc-4557-8668-b92152a8292e-kube-api-access-jttzl\") pod \"e2aec6c4-ddfc-4557-8668-b92152a8292e\" (UID: \"e2aec6c4-ddfc-4557-8668-b92152a8292e\") " Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.101303 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-utilities" (OuterVolumeSpecName: "utilities") pod "e2aec6c4-ddfc-4557-8668-b92152a8292e" (UID: "e2aec6c4-ddfc-4557-8668-b92152a8292e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.109324 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2aec6c4-ddfc-4557-8668-b92152a8292e-kube-api-access-jttzl" (OuterVolumeSpecName: "kube-api-access-jttzl") pod "e2aec6c4-ddfc-4557-8668-b92152a8292e" (UID: "e2aec6c4-ddfc-4557-8668-b92152a8292e"). InnerVolumeSpecName "kube-api-access-jttzl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.152717 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2aec6c4-ddfc-4557-8668-b92152a8292e" (UID: "e2aec6c4-ddfc-4557-8668-b92152a8292e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.201145 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.201184 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2aec6c4-ddfc-4557-8668-b92152a8292e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.201200 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jttzl\" (UniqueName: \"kubernetes.io/projected/e2aec6c4-ddfc-4557-8668-b92152a8292e-kube-api-access-jttzl\") on node \"crc\" DevicePath \"\"" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.488716 4840 generic.go:334] "Generic (PLEG): container finished" podID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerID="fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd" exitCode=0 Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.488765 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2lr8n" event={"ID":"e2aec6c4-ddfc-4557-8668-b92152a8292e","Type":"ContainerDied","Data":"fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd"} Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.488824 4840 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-2lr8n" event={"ID":"e2aec6c4-ddfc-4557-8668-b92152a8292e","Type":"ContainerDied","Data":"fd09575f7f171650a65f1ca39284846a660810e3081d1ab4e3974d1e15ad93dc"} Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.488848 4840 scope.go:117] "RemoveContainer" containerID="fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.488863 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2lr8n" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.508599 4840 scope.go:117] "RemoveContainer" containerID="6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.529587 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2lr8n"] Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.536685 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2lr8n"] Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.549628 4840 scope.go:117] "RemoveContainer" containerID="a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.565872 4840 scope.go:117] "RemoveContainer" containerID="fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd" Jan 29 12:31:55 crc kubenswrapper[4840]: E0129 12:31:55.566570 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd\": container with ID starting with fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd not found: ID does not exist" containerID="fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 
12:31:55.566709 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd"} err="failed to get container status \"fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd\": rpc error: code = NotFound desc = could not find container \"fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd\": container with ID starting with fe743ed4b655e60d865a04f7592a4f98d8f06bb7c09cfd0bc4bd1ed3534579dd not found: ID does not exist" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.566750 4840 scope.go:117] "RemoveContainer" containerID="6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e" Jan 29 12:31:55 crc kubenswrapper[4840]: E0129 12:31:55.567577 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e\": container with ID starting with 6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e not found: ID does not exist" containerID="6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.567621 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e"} err="failed to get container status \"6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e\": rpc error: code = NotFound desc = could not find container \"6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e\": container with ID starting with 6da65130b42ee9e7c5913edec82ed5876009d6241d73210b3c614be367b3688e not found: ID does not exist" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.567656 4840 scope.go:117] "RemoveContainer" containerID="a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096" Jan 29 12:31:55 crc 
kubenswrapper[4840]: E0129 12:31:55.568026 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096\": container with ID starting with a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096 not found: ID does not exist" containerID="a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096" Jan 29 12:31:55 crc kubenswrapper[4840]: I0129 12:31:55.568073 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096"} err="failed to get container status \"a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096\": rpc error: code = NotFound desc = could not find container \"a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096\": container with ID starting with a45df516bab45a4311e9806c5c1c0a7d3afa3911b6b1f438f54c7c0364f3c096 not found: ID does not exist" Jan 29 12:31:57 crc kubenswrapper[4840]: I0129 12:31:57.013457 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" path="/var/lib/kubelet/pods/e2aec6c4-ddfc-4557-8668-b92152a8292e/volumes" Jan 29 12:31:57 crc kubenswrapper[4840]: I0129 12:31:57.838902 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:31:57 crc kubenswrapper[4840]: I0129 12:31:57.838967 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:31:57 crc kubenswrapper[4840]: I0129 12:31:57.889469 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:31:58 crc kubenswrapper[4840]: I0129 12:31:58.562778 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:31:59 crc kubenswrapper[4840]: I0129 12:31:59.090162 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2p76"] Jan 29 12:32:00 crc kubenswrapper[4840]: I0129 12:32:00.525850 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p2p76" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="registry-server" containerID="cri-o://2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c" gracePeriod=2 Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.294186 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.403334 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-utilities\") pod \"140086e5-966c-4b77-8c12-76dfbf63a2c0\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.403386 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-catalog-content\") pod \"140086e5-966c-4b77-8c12-76dfbf63a2c0\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.403418 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nwt8v\" (UniqueName: \"kubernetes.io/projected/140086e5-966c-4b77-8c12-76dfbf63a2c0-kube-api-access-nwt8v\") pod \"140086e5-966c-4b77-8c12-76dfbf63a2c0\" (UID: \"140086e5-966c-4b77-8c12-76dfbf63a2c0\") " Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.404855 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-utilities" (OuterVolumeSpecName: "utilities") pod "140086e5-966c-4b77-8c12-76dfbf63a2c0" (UID: "140086e5-966c-4b77-8c12-76dfbf63a2c0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.410150 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/140086e5-966c-4b77-8c12-76dfbf63a2c0-kube-api-access-nwt8v" (OuterVolumeSpecName: "kube-api-access-nwt8v") pod "140086e5-966c-4b77-8c12-76dfbf63a2c0" (UID: "140086e5-966c-4b77-8c12-76dfbf63a2c0"). InnerVolumeSpecName "kube-api-access-nwt8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.431257 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "140086e5-966c-4b77-8c12-76dfbf63a2c0" (UID: "140086e5-966c-4b77-8c12-76dfbf63a2c0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.504712 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.504748 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/140086e5-966c-4b77-8c12-76dfbf63a2c0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.504760 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nwt8v\" (UniqueName: \"kubernetes.io/projected/140086e5-966c-4b77-8c12-76dfbf63a2c0-kube-api-access-nwt8v\") on node \"crc\" DevicePath \"\"" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.547183 4840 generic.go:334] "Generic (PLEG): container finished" podID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerID="2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c" exitCode=0 Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.547235 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2p76" event={"ID":"140086e5-966c-4b77-8c12-76dfbf63a2c0","Type":"ContainerDied","Data":"2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c"} Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.547272 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p2p76" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.547334 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p2p76" event={"ID":"140086e5-966c-4b77-8c12-76dfbf63a2c0","Type":"ContainerDied","Data":"24b27ec2fa224c5d715db7644413b603c47f441ced34eb2f6cec7e6a333e7f0a"} Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.547367 4840 scope.go:117] "RemoveContainer" containerID="2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.569026 4840 scope.go:117] "RemoveContainer" containerID="eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.585270 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2p76"] Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.593837 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p2p76"] Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.609158 4840 scope.go:117] "RemoveContainer" containerID="e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.632265 4840 scope.go:117] "RemoveContainer" containerID="2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c" Jan 29 12:32:02 crc kubenswrapper[4840]: E0129 12:32:02.632726 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c\": container with ID starting with 2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c not found: ID does not exist" containerID="2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.632782 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c"} err="failed to get container status \"2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c\": rpc error: code = NotFound desc = could not find container \"2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c\": container with ID starting with 2f4c1ced5cb8578b177c1c669342924f7c0b641be5d338c3c70e13ab7aee9a3c not found: ID does not exist" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.632829 4840 scope.go:117] "RemoveContainer" containerID="eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd" Jan 29 12:32:02 crc kubenswrapper[4840]: E0129 12:32:02.633353 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd\": container with ID starting with eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd not found: ID does not exist" containerID="eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.633382 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd"} err="failed to get container status \"eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd\": rpc error: code = NotFound desc = could not find container \"eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd\": container with ID starting with eeff4caa8e1d109af90d7d0716bd05ba53a13956192f05c7a52ef359dbbad9fd not found: ID does not exist" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.633406 4840 scope.go:117] "RemoveContainer" containerID="e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da" Jan 29 12:32:02 crc kubenswrapper[4840]: E0129 
12:32:02.633685 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da\": container with ID starting with e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da not found: ID does not exist" containerID="e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da" Jan 29 12:32:02 crc kubenswrapper[4840]: I0129 12:32:02.633730 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da"} err="failed to get container status \"e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da\": rpc error: code = NotFound desc = could not find container \"e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da\": container with ID starting with e69c4ea520feda5c266786bcbd74b13054b8bda8b2f441f6fd11439d659d87da not found: ID does not exist" Jan 29 12:32:03 crc kubenswrapper[4840]: I0129 12:32:03.011272 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" path="/var/lib/kubelet/pods/140086e5-966c-4b77-8c12-76dfbf63a2c0/volumes" Jan 29 12:32:08 crc kubenswrapper[4840]: I0129 12:32:08.002286 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4" Jan 29 12:32:08 crc kubenswrapper[4840]: E0129 12:32:08.003546 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:32:21 crc kubenswrapper[4840]: I0129 12:32:21.002170 
4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:32:21 crc kubenswrapper[4840]: E0129 12:32:21.004180 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:32:33 crc kubenswrapper[4840]: I0129 12:32:33.001808 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:32:33 crc kubenswrapper[4840]: E0129 12:32:33.002679 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:32:48 crc kubenswrapper[4840]: I0129 12:32:48.002066 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:32:48 crc kubenswrapper[4840]: E0129 12:32:48.002822 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:33:00 crc kubenswrapper[4840]: I0129 12:33:00.002037 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:33:00 crc kubenswrapper[4840]: E0129 12:33:00.002922 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:33:15 crc kubenswrapper[4840]: I0129 12:33:15.001229 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:33:15 crc kubenswrapper[4840]: E0129 12:33:15.002140 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:33:27 crc kubenswrapper[4840]: I0129 12:33:27.001893 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:33:27 crc kubenswrapper[4840]: E0129 12:33:27.003178 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:33:42 crc kubenswrapper[4840]: I0129 12:33:42.001009 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:33:42 crc kubenswrapper[4840]: E0129 12:33:42.001728 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:33:58 crc kubenswrapper[4840]: I0129 12:33:58.001590 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:33:58 crc kubenswrapper[4840]: E0129 12:33:58.002363 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:34:11 crc kubenswrapper[4840]: I0129 12:34:11.001677 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:34:11 crc kubenswrapper[4840]: E0129 12:34:11.002990 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:34:23 crc kubenswrapper[4840]: I0129 12:34:23.007041 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:34:23 crc kubenswrapper[4840]: E0129 12:34:23.007950 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:34:38 crc kubenswrapper[4840]: I0129 12:34:38.001467 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:34:38 crc kubenswrapper[4840]: E0129 12:34:38.002250 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:34:50 crc kubenswrapper[4840]: I0129 12:34:50.002583 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:34:50 crc kubenswrapper[4840]: E0129 12:34:50.003813 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:35:05 crc kubenswrapper[4840]: I0129 12:35:05.001196 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:35:05 crc kubenswrapper[4840]: E0129 12:35:05.002124 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:35:16 crc kubenswrapper[4840]: I0129 12:35:16.002092 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:35:16 crc kubenswrapper[4840]: E0129 12:35:16.002918 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:35:28 crc kubenswrapper[4840]: I0129 12:35:28.001735 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:35:28 crc kubenswrapper[4840]: E0129 12:35:28.003104 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:35:39 crc kubenswrapper[4840]: I0129 12:35:39.007371 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:35:39 crc kubenswrapper[4840]: E0129 12:35:39.008605 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:35:51 crc kubenswrapper[4840]: I0129 12:35:51.001692 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:35:51 crc kubenswrapper[4840]: E0129 12:35:51.015159 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:36:04 crc kubenswrapper[4840]: I0129 12:36:04.001598 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:36:04 crc kubenswrapper[4840]: I0129 12:36:04.638756 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"fa25ef95e49b0a7c454bf39ea1dd39dfa91fc9e8ce01036359a977f3eaac842d"}
Jan 29 12:38:23 crc kubenswrapper[4840]: I0129 12:38:23.521934 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:38:23 crc kubenswrapper[4840]: I0129 12:38:23.522856 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:38:53 crc kubenswrapper[4840]: I0129 12:38:53.522205 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:38:53 crc kubenswrapper[4840]: I0129 12:38:53.524753 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.586423 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5rthh"]
Jan 29 12:39:26 crc kubenswrapper[4840]: E0129 12:39:21.588345 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="extract-content"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.588413 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="extract-content"
Jan 29 12:39:26 crc kubenswrapper[4840]: E0129 12:39:21.588446 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="extract-utilities"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.588498 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="extract-utilities"
Jan 29 12:39:26 crc kubenswrapper[4840]: E0129 12:39:21.588542 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="registry-server"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.588594 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="registry-server"
Jan 29 12:39:26 crc kubenswrapper[4840]: E0129 12:39:21.588615 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="extract-content"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.588627 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="extract-content"
Jan 29 12:39:26 crc kubenswrapper[4840]: E0129 12:39:21.588703 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="registry-server"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.588718 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="registry-server"
Jan 29 12:39:26 crc kubenswrapper[4840]: E0129 12:39:21.588733 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="extract-utilities"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.588785 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="extract-utilities"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.589742 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="140086e5-966c-4b77-8c12-76dfbf63a2c0" containerName="registry-server"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.589796 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2aec6c4-ddfc-4557-8668-b92152a8292e" containerName="registry-server"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.594509 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.598695 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rthh"]
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.756123 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl858\" (UniqueName: \"kubernetes.io/projected/7d5eef78-019e-4d19-b57c-df0e945ea2dc-kube-api-access-zl858\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.756199 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-catalog-content\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.756288 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-utilities\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.858171 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zl858\" (UniqueName: \"kubernetes.io/projected/7d5eef78-019e-4d19-b57c-df0e945ea2dc-kube-api-access-zl858\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.858255 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-catalog-content\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.858321 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-utilities\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.858828 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-catalog-content\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.859003 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-utilities\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.881002 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zl858\" (UniqueName: \"kubernetes.io/projected/7d5eef78-019e-4d19-b57c-df0e945ea2dc-kube-api-access-zl858\") pod \"redhat-operators-5rthh\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") " pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:21.965662 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:23.521882 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:23.522495 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:23.522591 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:23.523898 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fa25ef95e49b0a7c454bf39ea1dd39dfa91fc9e8ce01036359a977f3eaac842d"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:23.524007 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://fa25ef95e49b0a7c454bf39ea1dd39dfa91fc9e8ce01036359a977f3eaac842d" gracePeriod=600
Jan 29 12:39:26 crc kubenswrapper[4840]: I0129 12:39:26.287076 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5rthh"]
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.163611 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="fa25ef95e49b0a7c454bf39ea1dd39dfa91fc9e8ce01036359a977f3eaac842d" exitCode=0
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.163685 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"fa25ef95e49b0a7c454bf39ea1dd39dfa91fc9e8ce01036359a977f3eaac842d"}
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.164253 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c"}
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.164279 4840 scope.go:117] "RemoveContainer" containerID="e2e31afee895073925e5483c7c205928cf1d1f81a800193cf346eebf364b5ce4"
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.167269 4840 generic.go:334] "Generic (PLEG): container finished" podID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerID="58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d" exitCode=0
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.167531 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rthh" event={"ID":"7d5eef78-019e-4d19-b57c-df0e945ea2dc","Type":"ContainerDied","Data":"58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d"}
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.167739 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rthh" event={"ID":"7d5eef78-019e-4d19-b57c-df0e945ea2dc","Type":"ContainerStarted","Data":"010aa576ac7a1c93b03bceb505117801ed2c37b6e873831cb123e54e73aeecba"}
Jan 29 12:39:27 crc kubenswrapper[4840]: I0129 12:39:27.170701 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 12:39:29 crc kubenswrapper[4840]: I0129 12:39:29.196478 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rthh" event={"ID":"7d5eef78-019e-4d19-b57c-df0e945ea2dc","Type":"ContainerStarted","Data":"0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16"}
Jan 29 12:39:30 crc kubenswrapper[4840]: I0129 12:39:30.208044 4840 generic.go:334] "Generic (PLEG): container finished" podID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerID="0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16" exitCode=0
Jan 29 12:39:30 crc kubenswrapper[4840]: I0129 12:39:30.208105 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rthh" event={"ID":"7d5eef78-019e-4d19-b57c-df0e945ea2dc","Type":"ContainerDied","Data":"0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16"}
Jan 29 12:39:31 crc kubenswrapper[4840]: I0129 12:39:31.223181 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rthh" event={"ID":"7d5eef78-019e-4d19-b57c-df0e945ea2dc","Type":"ContainerStarted","Data":"075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb"}
Jan 29 12:39:31 crc kubenswrapper[4840]: I0129 12:39:31.247057 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5rthh" podStartSLOduration=6.811786023 podStartE2EDuration="10.247036612s" podCreationTimestamp="2026-01-29 12:39:21 +0000 UTC" firstStartedPulling="2026-01-29 12:39:27.170468694 +0000 UTC m=+2098.833448587" lastFinishedPulling="2026-01-29 12:39:30.605719283 +0000 UTC m=+2102.268699176" observedRunningTime="2026-01-29 12:39:31.24476324 +0000 UTC m=+2102.907743133" watchObservedRunningTime="2026-01-29 12:39:31.247036612 +0000 UTC m=+2102.910016505"
Jan 29 12:39:31 crc kubenswrapper[4840]: I0129 12:39:31.965853 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:31 crc kubenswrapper[4840]: I0129 12:39:31.966380 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:33 crc kubenswrapper[4840]: I0129 12:39:33.019705 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5rthh" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="registry-server" probeResult="failure" output=<
Jan 29 12:39:33 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s
Jan 29 12:39:33 crc kubenswrapper[4840]: >
Jan 29 12:39:42 crc kubenswrapper[4840]: I0129 12:39:42.032282 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:42 crc kubenswrapper[4840]: I0129 12:39:42.080482 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:42 crc kubenswrapper[4840]: I0129 12:39:42.271606 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5rthh"]
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.318412 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5rthh" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="registry-server" containerID="cri-o://075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb" gracePeriod=2
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.760656 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.809809 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-catalog-content\") pod \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") "
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.809858 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-utilities\") pod \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") "
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.810005 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zl858\" (UniqueName: \"kubernetes.io/projected/7d5eef78-019e-4d19-b57c-df0e945ea2dc-kube-api-access-zl858\") pod \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\" (UID: \"7d5eef78-019e-4d19-b57c-df0e945ea2dc\") "
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.811686 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-utilities" (OuterVolumeSpecName: "utilities") pod "7d5eef78-019e-4d19-b57c-df0e945ea2dc" (UID: "7d5eef78-019e-4d19-b57c-df0e945ea2dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.824756 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5eef78-019e-4d19-b57c-df0e945ea2dc-kube-api-access-zl858" (OuterVolumeSpecName: "kube-api-access-zl858") pod "7d5eef78-019e-4d19-b57c-df0e945ea2dc" (UID: "7d5eef78-019e-4d19-b57c-df0e945ea2dc"). InnerVolumeSpecName "kube-api-access-zl858". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.911716 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zl858\" (UniqueName: \"kubernetes.io/projected/7d5eef78-019e-4d19-b57c-df0e945ea2dc-kube-api-access-zl858\") on node \"crc\" DevicePath \"\""
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.911752 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:39:43 crc kubenswrapper[4840]: I0129 12:39:43.938227 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d5eef78-019e-4d19-b57c-df0e945ea2dc" (UID: "7d5eef78-019e-4d19-b57c-df0e945ea2dc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.013118 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d5eef78-019e-4d19-b57c-df0e945ea2dc-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.329042 4840 generic.go:334] "Generic (PLEG): container finished" podID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerID="075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb" exitCode=0
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.329095 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5rthh"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.329109 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rthh" event={"ID":"7d5eef78-019e-4d19-b57c-df0e945ea2dc","Type":"ContainerDied","Data":"075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb"}
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.329209 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5rthh" event={"ID":"7d5eef78-019e-4d19-b57c-df0e945ea2dc","Type":"ContainerDied","Data":"010aa576ac7a1c93b03bceb505117801ed2c37b6e873831cb123e54e73aeecba"}
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.329266 4840 scope.go:117] "RemoveContainer" containerID="075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.348558 4840 scope.go:117] "RemoveContainer" containerID="0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.544843 4840 scope.go:117] "RemoveContainer" containerID="58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.557972 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5rthh"]
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.563228 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5rthh"]
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.578215 4840 scope.go:117] "RemoveContainer" containerID="075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb"
Jan 29 12:39:44 crc kubenswrapper[4840]: E0129 12:39:44.578702 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb\": container with ID starting with 075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb not found: ID does not exist" containerID="075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.578760 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb"} err="failed to get container status \"075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb\": rpc error: code = NotFound desc = could not find container \"075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb\": container with ID starting with 075df5937171ad8a60d9bbb19f856cbf5e757537629c25e40267c60f94e71ffb not found: ID does not exist"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.578796 4840 scope.go:117] "RemoveContainer" containerID="0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16"
Jan 29 12:39:44 crc kubenswrapper[4840]: E0129 12:39:44.579417 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16\": container with ID starting with 0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16 not found: ID does not exist" containerID="0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.579437 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16"} err="failed to get container status \"0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16\": rpc error: code = NotFound desc = could not find container \"0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16\": container with ID starting with 0080a6bd305e8f8b97169cd5b051f03efe0840767c0bb32df697194a40e7dc16 not found: ID does not exist"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.579451 4840 scope.go:117] "RemoveContainer" containerID="58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d"
Jan 29 12:39:44 crc kubenswrapper[4840]: E0129 12:39:44.579744 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d\": container with ID starting with 58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d not found: ID does not exist" containerID="58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d"
Jan 29 12:39:44 crc kubenswrapper[4840]: I0129 12:39:44.579758 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d"} err="failed to get container status \"58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d\": rpc error: code = NotFound desc = could not find container \"58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d\": container with ID starting with 58d6d6cef4acfab8dc9e2be35d38977d5cc47835a3ff48f2d6b2b6cdca88497d not found: ID does not exist"
Jan 29 12:39:45 crc kubenswrapper[4840]: I0129 12:39:45.013396 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" path="/var/lib/kubelet/pods/7d5eef78-019e-4d19-b57c-df0e945ea2dc/volumes"
Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.181091 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wnzlg"]
Jan 29 12:41:42 crc kubenswrapper[4840]: E0129 12:41:42.183327 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="registry-server"
Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.183361 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="registry-server"
Jan 29 12:41:42 crc kubenswrapper[4840]: E0129 12:41:42.183409 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="extract-utilities"
Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.183422 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="extract-utilities"
Jan 29 12:41:42 crc kubenswrapper[4840]: E0129 12:41:42.183439 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="extract-content"
Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.183452 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="extract-content"
Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.183724 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d5eef78-019e-4d19-b57c-df0e945ea2dc" containerName="registry-server"
Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.185737 4840 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.191088 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnzlg"] Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.293431 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-utilities\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.293548 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-catalog-content\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.293706 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krl98\" (UniqueName: \"kubernetes.io/projected/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-kube-api-access-krl98\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.396401 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krl98\" (UniqueName: \"kubernetes.io/projected/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-kube-api-access-krl98\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.396567 4840 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-utilities\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.396705 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-catalog-content\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.397315 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-utilities\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.397553 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-catalog-content\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.429657 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krl98\" (UniqueName: \"kubernetes.io/projected/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-kube-api-access-krl98\") pod \"community-operators-wnzlg\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:42 crc kubenswrapper[4840]: I0129 12:41:42.519452 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:44 crc kubenswrapper[4840]: I0129 12:41:44.969460 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wnzlg"] Jan 29 12:41:45 crc kubenswrapper[4840]: I0129 12:41:45.328445 4840 generic.go:334] "Generic (PLEG): container finished" podID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerID="85fb9f1cbc8deaa5e26deaddaeb4ea40a0a383c748e0d75bb1da913b55d77dd1" exitCode=0 Jan 29 12:41:45 crc kubenswrapper[4840]: I0129 12:41:45.328538 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnzlg" event={"ID":"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7","Type":"ContainerDied","Data":"85fb9f1cbc8deaa5e26deaddaeb4ea40a0a383c748e0d75bb1da913b55d77dd1"} Jan 29 12:41:45 crc kubenswrapper[4840]: I0129 12:41:45.329037 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnzlg" event={"ID":"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7","Type":"ContainerStarted","Data":"bddde61678c75a5af249e2ccc6d507763b96dd86802c0c4c0cb815d9bae48740"} Jan 29 12:41:48 crc kubenswrapper[4840]: I0129 12:41:48.356190 4840 generic.go:334] "Generic (PLEG): container finished" podID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerID="c2b30741e4d36142409a0a3a3e8af5eed138e883e15700ba8d137b6a6834a632" exitCode=0 Jan 29 12:41:48 crc kubenswrapper[4840]: I0129 12:41:48.356277 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnzlg" event={"ID":"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7","Type":"ContainerDied","Data":"c2b30741e4d36142409a0a3a3e8af5eed138e883e15700ba8d137b6a6834a632"} Jan 29 12:41:49 crc kubenswrapper[4840]: I0129 12:41:49.367331 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnzlg" 
event={"ID":"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7","Type":"ContainerStarted","Data":"d0b46dca8c56de80f720dcff573e22b23701e7284ee35ce6b2984c2146132c80"} Jan 29 12:41:49 crc kubenswrapper[4840]: I0129 12:41:49.403384 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wnzlg" podStartSLOduration=4.819387247 podStartE2EDuration="7.403335952s" podCreationTimestamp="2026-01-29 12:41:42 +0000 UTC" firstStartedPulling="2026-01-29 12:41:46.338246857 +0000 UTC m=+2238.001226750" lastFinishedPulling="2026-01-29 12:41:48.922195552 +0000 UTC m=+2240.585175455" observedRunningTime="2026-01-29 12:41:49.392994331 +0000 UTC m=+2241.055974244" watchObservedRunningTime="2026-01-29 12:41:49.403335952 +0000 UTC m=+2241.066315885" Jan 29 12:41:52 crc kubenswrapper[4840]: I0129 12:41:52.520342 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:52 crc kubenswrapper[4840]: I0129 12:41:52.520832 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:52 crc kubenswrapper[4840]: I0129 12:41:52.586329 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:41:53 crc kubenswrapper[4840]: I0129 12:41:53.522178 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:41:53 crc kubenswrapper[4840]: I0129 12:41:53.522280 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:42:02 crc kubenswrapper[4840]: I0129 12:42:02.602260 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:42:02 crc kubenswrapper[4840]: I0129 12:42:02.679496 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnzlg"] Jan 29 12:42:03 crc kubenswrapper[4840]: I0129 12:42:03.498198 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wnzlg" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="registry-server" containerID="cri-o://d0b46dca8c56de80f720dcff573e22b23701e7284ee35ce6b2984c2146132c80" gracePeriod=2 Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.508484 4840 generic.go:334] "Generic (PLEG): container finished" podID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerID="d0b46dca8c56de80f720dcff573e22b23701e7284ee35ce6b2984c2146132c80" exitCode=0 Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.508542 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnzlg" event={"ID":"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7","Type":"ContainerDied","Data":"d0b46dca8c56de80f720dcff573e22b23701e7284ee35ce6b2984c2146132c80"} Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.509016 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wnzlg" event={"ID":"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7","Type":"ContainerDied","Data":"bddde61678c75a5af249e2ccc6d507763b96dd86802c0c4c0cb815d9bae48740"} Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.509030 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bddde61678c75a5af249e2ccc6d507763b96dd86802c0c4c0cb815d9bae48740" Jan 29 12:42:04 crc 
kubenswrapper[4840]: I0129 12:42:04.534259 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.688159 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-utilities\") pod \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.688290 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krl98\" (UniqueName: \"kubernetes.io/projected/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-kube-api-access-krl98\") pod \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.688408 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-catalog-content\") pod \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\" (UID: \"c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7\") " Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.690008 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-utilities" (OuterVolumeSpecName: "utilities") pod "c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" (UID: "c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.695555 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-kube-api-access-krl98" (OuterVolumeSpecName: "kube-api-access-krl98") pod "c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" (UID: "c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7"). InnerVolumeSpecName "kube-api-access-krl98". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.739150 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" (UID: "c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.789772 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.789815 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krl98\" (UniqueName: \"kubernetes.io/projected/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-kube-api-access-krl98\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:04 crc kubenswrapper[4840]: I0129 12:42:04.789828 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:42:05 crc kubenswrapper[4840]: I0129 12:42:05.517223 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wnzlg" Jan 29 12:42:05 crc kubenswrapper[4840]: I0129 12:42:05.544927 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wnzlg"] Jan 29 12:42:05 crc kubenswrapper[4840]: I0129 12:42:05.553909 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wnzlg"] Jan 29 12:42:07 crc kubenswrapper[4840]: I0129 12:42:07.014434 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" path="/var/lib/kubelet/pods/c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7/volumes" Jan 29 12:42:23 crc kubenswrapper[4840]: I0129 12:42:23.521673 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:42:23 crc kubenswrapper[4840]: I0129 12:42:23.522419 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.265695 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-m6x64"] Jan 29 12:42:48 crc kubenswrapper[4840]: E0129 12:42:48.267933 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="registry-server" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.267977 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="registry-server" Jan 29 
12:42:48 crc kubenswrapper[4840]: E0129 12:42:48.267993 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="extract-content" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.268000 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="extract-content" Jan 29 12:42:48 crc kubenswrapper[4840]: E0129 12:42:48.268031 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="extract-utilities" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.268038 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="extract-utilities" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.268205 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="c97cd4c6-ebe9-4982-a67c-741bfa4ec9c7" containerName="registry-server" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.269397 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.288829 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6x64"] Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.404766 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-catalog-content\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.405365 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-796lr\" (UniqueName: \"kubernetes.io/projected/81d46ddf-2348-4d80-b3b5-030116219c8f-kube-api-access-796lr\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.405544 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-utilities\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.506708 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-catalog-content\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.507380 4840 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-catalog-content\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.507813 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-796lr\" (UniqueName: \"kubernetes.io/projected/81d46ddf-2348-4d80-b3b5-030116219c8f-kube-api-access-796lr\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.508130 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-utilities\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.508656 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-utilities\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.533753 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-796lr\" (UniqueName: \"kubernetes.io/projected/81d46ddf-2348-4d80-b3b5-030116219c8f-kube-api-access-796lr\") pod \"redhat-marketplace-m6x64\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:48 crc kubenswrapper[4840]: I0129 12:42:48.609211 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:49 crc kubenswrapper[4840]: I0129 12:42:49.100357 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6x64"] Jan 29 12:42:49 crc kubenswrapper[4840]: I0129 12:42:49.874101 4840 generic.go:334] "Generic (PLEG): container finished" podID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerID="3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643" exitCode=0 Jan 29 12:42:49 crc kubenswrapper[4840]: I0129 12:42:49.874214 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6x64" event={"ID":"81d46ddf-2348-4d80-b3b5-030116219c8f","Type":"ContainerDied","Data":"3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643"} Jan 29 12:42:49 crc kubenswrapper[4840]: I0129 12:42:49.874562 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6x64" event={"ID":"81d46ddf-2348-4d80-b3b5-030116219c8f","Type":"ContainerStarted","Data":"42a411db24bde7a131c19d9a9dad1833a530180b0231c24ddd740a6842ae6512"} Jan 29 12:42:51 crc kubenswrapper[4840]: I0129 12:42:51.913842 4840 generic.go:334] "Generic (PLEG): container finished" podID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerID="5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be" exitCode=0 Jan 29 12:42:51 crc kubenswrapper[4840]: I0129 12:42:51.913936 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6x64" event={"ID":"81d46ddf-2348-4d80-b3b5-030116219c8f","Type":"ContainerDied","Data":"5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be"} Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.521778 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.522355 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.522425 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.523377 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.523452 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" gracePeriod=600 Jan 29 12:42:53 crc kubenswrapper[4840]: E0129 12:42:53.711326 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" 
podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.936693 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" exitCode=0 Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.936784 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c"} Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.937180 4840 scope.go:117] "RemoveContainer" containerID="fa25ef95e49b0a7c454bf39ea1dd39dfa91fc9e8ce01036359a977f3eaac842d" Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.938084 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:42:53 crc kubenswrapper[4840]: E0129 12:42:53.938432 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.940568 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6x64" event={"ID":"81d46ddf-2348-4d80-b3b5-030116219c8f","Type":"ContainerStarted","Data":"3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d"} Jan 29 12:42:53 crc kubenswrapper[4840]: I0129 12:42:53.990198 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-m6x64" 
podStartSLOduration=3.146342621 podStartE2EDuration="5.990177616s" podCreationTimestamp="2026-01-29 12:42:48 +0000 UTC" firstStartedPulling="2026-01-29 12:42:49.876015787 +0000 UTC m=+2301.538995720" lastFinishedPulling="2026-01-29 12:42:52.719850822 +0000 UTC m=+2304.382830715" observedRunningTime="2026-01-29 12:42:53.981764628 +0000 UTC m=+2305.644744521" watchObservedRunningTime="2026-01-29 12:42:53.990177616 +0000 UTC m=+2305.653157509" Jan 29 12:42:58 crc kubenswrapper[4840]: I0129 12:42:58.609399 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:58 crc kubenswrapper[4840]: I0129 12:42:58.610274 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:58 crc kubenswrapper[4840]: I0129 12:42:58.655105 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:59 crc kubenswrapper[4840]: I0129 12:42:59.039830 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:42:59 crc kubenswrapper[4840]: I0129 12:42:59.103248 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6x64"] Jan 29 12:43:00 crc kubenswrapper[4840]: I0129 12:43:00.997399 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-m6x64" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="registry-server" containerID="cri-o://3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d" gracePeriod=2 Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.424765 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.535770 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-catalog-content\") pod \"81d46ddf-2348-4d80-b3b5-030116219c8f\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.535886 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-utilities\") pod \"81d46ddf-2348-4d80-b3b5-030116219c8f\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.536541 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-796lr\" (UniqueName: \"kubernetes.io/projected/81d46ddf-2348-4d80-b3b5-030116219c8f-kube-api-access-796lr\") pod \"81d46ddf-2348-4d80-b3b5-030116219c8f\" (UID: \"81d46ddf-2348-4d80-b3b5-030116219c8f\") " Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.537002 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-utilities" (OuterVolumeSpecName: "utilities") pod "81d46ddf-2348-4d80-b3b5-030116219c8f" (UID: "81d46ddf-2348-4d80-b3b5-030116219c8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.543907 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81d46ddf-2348-4d80-b3b5-030116219c8f-kube-api-access-796lr" (OuterVolumeSpecName: "kube-api-access-796lr") pod "81d46ddf-2348-4d80-b3b5-030116219c8f" (UID: "81d46ddf-2348-4d80-b3b5-030116219c8f"). InnerVolumeSpecName "kube-api-access-796lr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.563034 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81d46ddf-2348-4d80-b3b5-030116219c8f" (UID: "81d46ddf-2348-4d80-b3b5-030116219c8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.638444 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.638489 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81d46ddf-2348-4d80-b3b5-030116219c8f-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:43:01 crc kubenswrapper[4840]: I0129 12:43:01.638503 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-796lr\" (UniqueName: \"kubernetes.io/projected/81d46ddf-2348-4d80-b3b5-030116219c8f-kube-api-access-796lr\") on node \"crc\" DevicePath \"\"" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.009424 4840 generic.go:334] "Generic (PLEG): container finished" podID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerID="3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d" exitCode=0 Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.009500 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6x64" event={"ID":"81d46ddf-2348-4d80-b3b5-030116219c8f","Type":"ContainerDied","Data":"3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d"} Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.009575 4840 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-m6x64" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.009607 4840 scope.go:117] "RemoveContainer" containerID="3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.009590 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-m6x64" event={"ID":"81d46ddf-2348-4d80-b3b5-030116219c8f","Type":"ContainerDied","Data":"42a411db24bde7a131c19d9a9dad1833a530180b0231c24ddd740a6842ae6512"} Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.034351 4840 scope.go:117] "RemoveContainer" containerID="5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.047309 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6x64"] Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.057350 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-m6x64"] Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.082447 4840 scope.go:117] "RemoveContainer" containerID="3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.104808 4840 scope.go:117] "RemoveContainer" containerID="3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d" Jan 29 12:43:02 crc kubenswrapper[4840]: E0129 12:43:02.105639 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d\": container with ID starting with 3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d not found: ID does not exist" containerID="3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.105689 4840 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d"} err="failed to get container status \"3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d\": rpc error: code = NotFound desc = could not find container \"3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d\": container with ID starting with 3dd35c1796282ac64c97331d1a820827ba88b80e81effa1b4157b5f11d492f5d not found: ID does not exist" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.105725 4840 scope.go:117] "RemoveContainer" containerID="5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be" Jan 29 12:43:02 crc kubenswrapper[4840]: E0129 12:43:02.106586 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be\": container with ID starting with 5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be not found: ID does not exist" containerID="5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.106636 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be"} err="failed to get container status \"5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be\": rpc error: code = NotFound desc = could not find container \"5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be\": container with ID starting with 5f1f023cb26f93b9672b80513932d9ad349f1927a51a801c5683de4f564817be not found: ID does not exist" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.106675 4840 scope.go:117] "RemoveContainer" containerID="3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643" Jan 29 12:43:02 crc kubenswrapper[4840]: E0129 
12:43:02.107286 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643\": container with ID starting with 3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643 not found: ID does not exist" containerID="3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643" Jan 29 12:43:02 crc kubenswrapper[4840]: I0129 12:43:02.107341 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643"} err="failed to get container status \"3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643\": rpc error: code = NotFound desc = could not find container \"3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643\": container with ID starting with 3d64fa2eee21a98811b2a4f7393d3f9975b0eb90f444c69543ba21b870575643 not found: ID does not exist" Jan 29 12:43:03 crc kubenswrapper[4840]: I0129 12:43:03.013151 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" path="/var/lib/kubelet/pods/81d46ddf-2348-4d80-b3b5-030116219c8f/volumes" Jan 29 12:43:08 crc kubenswrapper[4840]: I0129 12:43:08.001440 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:43:08 crc kubenswrapper[4840]: E0129 12:43:08.002327 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:43:22 crc kubenswrapper[4840]: I0129 12:43:22.002192 
4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:43:22 crc kubenswrapper[4840]: E0129 12:43:22.003405 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:43:37 crc kubenswrapper[4840]: I0129 12:43:37.002006 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:43:37 crc kubenswrapper[4840]: E0129 12:43:37.003014 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:43:48 crc kubenswrapper[4840]: I0129 12:43:48.002225 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:43:48 crc kubenswrapper[4840]: E0129 12:43:48.004127 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:43:59 crc kubenswrapper[4840]: I0129 
12:43:59.006360 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:43:59 crc kubenswrapper[4840]: E0129 12:43:59.007447 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:44:14 crc kubenswrapper[4840]: I0129 12:44:14.002142 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:44:14 crc kubenswrapper[4840]: E0129 12:44:14.003300 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:44:26 crc kubenswrapper[4840]: I0129 12:44:26.002008 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:44:26 crc kubenswrapper[4840]: E0129 12:44:26.002833 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:44:37 crc 
kubenswrapper[4840]: I0129 12:44:37.001578 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:44:37 crc kubenswrapper[4840]: E0129 12:44:37.002515 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:44:50 crc kubenswrapper[4840]: I0129 12:44:50.001543 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:44:50 crc kubenswrapper[4840]: E0129 12:44:50.002984 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.156354 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2"] Jan 29 12:45:00 crc kubenswrapper[4840]: E0129 12:45:00.157457 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="registry-server" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.157472 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="registry-server" Jan 29 12:45:00 crc kubenswrapper[4840]: E0129 12:45:00.157501 4840 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="extract-utilities" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.157508 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="extract-utilities" Jan 29 12:45:00 crc kubenswrapper[4840]: E0129 12:45:00.157528 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="extract-content" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.157534 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="extract-content" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.157716 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="81d46ddf-2348-4d80-b3b5-030116219c8f" containerName="registry-server" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.158658 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.162492 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.162610 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.180534 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2"] Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.254158 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97de4a9c-1ef2-428d-ba4d-7375e0219c93-config-volume\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.254235 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfxhl\" (UniqueName: \"kubernetes.io/projected/97de4a9c-1ef2-428d-ba4d-7375e0219c93-kube-api-access-kfxhl\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.254293 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97de4a9c-1ef2-428d-ba4d-7375e0219c93-secret-volume\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.355340 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97de4a9c-1ef2-428d-ba4d-7375e0219c93-config-volume\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.355415 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfxhl\" (UniqueName: \"kubernetes.io/projected/97de4a9c-1ef2-428d-ba4d-7375e0219c93-kube-api-access-kfxhl\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.355468 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97de4a9c-1ef2-428d-ba4d-7375e0219c93-secret-volume\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.356485 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97de4a9c-1ef2-428d-ba4d-7375e0219c93-config-volume\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.363088 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/97de4a9c-1ef2-428d-ba4d-7375e0219c93-secret-volume\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.376008 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfxhl\" (UniqueName: \"kubernetes.io/projected/97de4a9c-1ef2-428d-ba4d-7375e0219c93-kube-api-access-kfxhl\") pod \"collect-profiles-29494845-2l8c2\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.486288 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:00 crc kubenswrapper[4840]: I0129 12:45:00.940791 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2"] Jan 29 12:45:01 crc kubenswrapper[4840]: I0129 12:45:01.092382 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" event={"ID":"97de4a9c-1ef2-428d-ba4d-7375e0219c93","Type":"ContainerStarted","Data":"b4c661d84d175cd827226428a6eff8b2b262a1fa4119521a7d2cf9221cee7101"} Jan 29 12:45:02 crc kubenswrapper[4840]: I0129 12:45:02.104445 4840 generic.go:334] "Generic (PLEG): container finished" podID="97de4a9c-1ef2-428d-ba4d-7375e0219c93" containerID="5a652fe56e9bd004ef1a0c9af20b29043bb08017b8588cd9365208ebbc578db6" exitCode=0 Jan 29 12:45:02 crc kubenswrapper[4840]: I0129 12:45:02.104571 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" 
event={"ID":"97de4a9c-1ef2-428d-ba4d-7375e0219c93","Type":"ContainerDied","Data":"5a652fe56e9bd004ef1a0c9af20b29043bb08017b8588cd9365208ebbc578db6"} Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.393700 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.521119 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfxhl\" (UniqueName: \"kubernetes.io/projected/97de4a9c-1ef2-428d-ba4d-7375e0219c93-kube-api-access-kfxhl\") pod \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.521186 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97de4a9c-1ef2-428d-ba4d-7375e0219c93-config-volume\") pod \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.521391 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97de4a9c-1ef2-428d-ba4d-7375e0219c93-secret-volume\") pod \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\" (UID: \"97de4a9c-1ef2-428d-ba4d-7375e0219c93\") " Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.522790 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/97de4a9c-1ef2-428d-ba4d-7375e0219c93-config-volume" (OuterVolumeSpecName: "config-volume") pod "97de4a9c-1ef2-428d-ba4d-7375e0219c93" (UID: "97de4a9c-1ef2-428d-ba4d-7375e0219c93"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.529069 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97de4a9c-1ef2-428d-ba4d-7375e0219c93-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "97de4a9c-1ef2-428d-ba4d-7375e0219c93" (UID: "97de4a9c-1ef2-428d-ba4d-7375e0219c93"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.529716 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97de4a9c-1ef2-428d-ba4d-7375e0219c93-kube-api-access-kfxhl" (OuterVolumeSpecName: "kube-api-access-kfxhl") pod "97de4a9c-1ef2-428d-ba4d-7375e0219c93" (UID: "97de4a9c-1ef2-428d-ba4d-7375e0219c93"). InnerVolumeSpecName "kube-api-access-kfxhl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.623061 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/97de4a9c-1ef2-428d-ba4d-7375e0219c93-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.623435 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfxhl\" (UniqueName: \"kubernetes.io/projected/97de4a9c-1ef2-428d-ba4d-7375e0219c93-kube-api-access-kfxhl\") on node \"crc\" DevicePath \"\"" Jan 29 12:45:03 crc kubenswrapper[4840]: I0129 12:45:03.623515 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/97de4a9c-1ef2-428d-ba4d-7375e0219c93-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 12:45:04 crc kubenswrapper[4840]: I0129 12:45:04.002034 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:45:04 crc kubenswrapper[4840]: E0129 
12:45:04.002457 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:45:04 crc kubenswrapper[4840]: I0129 12:45:04.122899 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" event={"ID":"97de4a9c-1ef2-428d-ba4d-7375e0219c93","Type":"ContainerDied","Data":"b4c661d84d175cd827226428a6eff8b2b262a1fa4119521a7d2cf9221cee7101"} Jan 29 12:45:04 crc kubenswrapper[4840]: I0129 12:45:04.122980 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4c661d84d175cd827226428a6eff8b2b262a1fa4119521a7d2cf9221cee7101" Jan 29 12:45:04 crc kubenswrapper[4840]: I0129 12:45:04.122989 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494845-2l8c2" Jan 29 12:45:04 crc kubenswrapper[4840]: I0129 12:45:04.474287 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl"] Jan 29 12:45:04 crc kubenswrapper[4840]: I0129 12:45:04.481159 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494800-d6wvl"] Jan 29 12:45:05 crc kubenswrapper[4840]: I0129 12:45:05.017978 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe84cd00-e72e-446a-981d-5f7c0e9304af" path="/var/lib/kubelet/pods/fe84cd00-e72e-446a-981d-5f7c0e9304af/volumes" Jan 29 12:45:18 crc kubenswrapper[4840]: I0129 12:45:18.001521 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:45:18 crc kubenswrapper[4840]: E0129 12:45:18.002514 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:45:33 crc kubenswrapper[4840]: I0129 12:45:33.002026 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:45:33 crc kubenswrapper[4840]: E0129 12:45:33.003259 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:45:41 crc kubenswrapper[4840]: I0129 12:45:41.302197 4840 scope.go:117] "RemoveContainer" containerID="e4adac4ebc544b3753286665b6bda3eda7f0d707c5c94fc11ede0e37086f2f4f" Jan 29 12:45:46 crc kubenswrapper[4840]: I0129 12:45:46.001091 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:45:46 crc kubenswrapper[4840]: E0129 12:45:46.001814 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:45:57 crc kubenswrapper[4840]: I0129 12:45:57.001830 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:45:57 crc kubenswrapper[4840]: E0129 12:45:57.003029 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:46:10 crc kubenswrapper[4840]: I0129 12:46:10.001664 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:46:10 crc kubenswrapper[4840]: E0129 12:46:10.004555 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:46:25 crc kubenswrapper[4840]: I0129 12:46:25.001595 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:46:25 crc kubenswrapper[4840]: E0129 12:46:25.002712 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:46:37 crc kubenswrapper[4840]: I0129 12:46:37.003766 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:46:37 crc kubenswrapper[4840]: E0129 12:46:37.005327 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:46:50 crc kubenswrapper[4840]: I0129 12:46:50.002398 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:46:50 crc kubenswrapper[4840]: E0129 12:46:50.003463 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:47:01 crc kubenswrapper[4840]: I0129 12:47:01.001681 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:47:01 crc kubenswrapper[4840]: E0129 12:47:01.003813 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:47:15 crc kubenswrapper[4840]: I0129 12:47:15.001872 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:47:15 crc kubenswrapper[4840]: E0129 12:47:15.002675 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:47:27 crc kubenswrapper[4840]: I0129 12:47:27.002537 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:47:27 crc kubenswrapper[4840]: E0129 12:47:27.006732 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.776465 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gttsm"] Jan 29 12:47:28 crc kubenswrapper[4840]: E0129 12:47:28.777248 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97de4a9c-1ef2-428d-ba4d-7375e0219c93" containerName="collect-profiles" Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.777268 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="97de4a9c-1ef2-428d-ba4d-7375e0219c93" containerName="collect-profiles" Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.777433 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="97de4a9c-1ef2-428d-ba4d-7375e0219c93" containerName="collect-profiles" Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.778791 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.790377 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gttsm"] Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.929401 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-utilities\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.929660 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-catalog-content\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:28 crc kubenswrapper[4840]: I0129 12:47:28.929751 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mgtd\" (UniqueName: \"kubernetes.io/projected/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-kube-api-access-6mgtd\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.031177 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-utilities\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.031612 4840 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-catalog-content\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.031719 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mgtd\" (UniqueName: \"kubernetes.io/projected/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-kube-api-access-6mgtd\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.032271 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-utilities\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.032360 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-catalog-content\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.061383 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mgtd\" (UniqueName: \"kubernetes.io/projected/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-kube-api-access-6mgtd\") pod \"certified-operators-gttsm\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.101183 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:29 crc kubenswrapper[4840]: I0129 12:47:29.613148 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gttsm"] Jan 29 12:47:30 crc kubenswrapper[4840]: I0129 12:47:30.553291 4840 generic.go:334] "Generic (PLEG): container finished" podID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerID="6f04757873dc0d1e363130eb11330170ee3e6a8af08d14c5469daf5bce09198f" exitCode=0 Jan 29 12:47:30 crc kubenswrapper[4840]: I0129 12:47:30.553361 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gttsm" event={"ID":"cb961e69-dbd4-4f54-8457-bb0f6a8900f3","Type":"ContainerDied","Data":"6f04757873dc0d1e363130eb11330170ee3e6a8af08d14c5469daf5bce09198f"} Jan 29 12:47:30 crc kubenswrapper[4840]: I0129 12:47:30.553706 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gttsm" event={"ID":"cb961e69-dbd4-4f54-8457-bb0f6a8900f3","Type":"ContainerStarted","Data":"eccfd961ad67e2cf4f2b72ee9a3005ef71e62e557a9a00db6f4186bd643a0db5"} Jan 29 12:47:30 crc kubenswrapper[4840]: I0129 12:47:30.557901 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 29 12:47:31 crc kubenswrapper[4840]: I0129 12:47:31.563759 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gttsm" event={"ID":"cb961e69-dbd4-4f54-8457-bb0f6a8900f3","Type":"ContainerStarted","Data":"abd3c72023df93ed0c29cccd0db72fc581f19db32d8ab3e8a2a343be07564043"} Jan 29 12:47:32 crc kubenswrapper[4840]: I0129 12:47:32.573092 4840 generic.go:334] "Generic (PLEG): container finished" podID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerID="abd3c72023df93ed0c29cccd0db72fc581f19db32d8ab3e8a2a343be07564043" exitCode=0 Jan 29 12:47:32 crc kubenswrapper[4840]: I0129 12:47:32.573141 4840 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-gttsm" event={"ID":"cb961e69-dbd4-4f54-8457-bb0f6a8900f3","Type":"ContainerDied","Data":"abd3c72023df93ed0c29cccd0db72fc581f19db32d8ab3e8a2a343be07564043"} Jan 29 12:47:33 crc kubenswrapper[4840]: I0129 12:47:33.583005 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gttsm" event={"ID":"cb961e69-dbd4-4f54-8457-bb0f6a8900f3","Type":"ContainerStarted","Data":"f316f6d869a97105123065f9bef40df2e3b7f08f4f8bf08a92b56b20aec03e6c"} Jan 29 12:47:33 crc kubenswrapper[4840]: I0129 12:47:33.625451 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gttsm" podStartSLOduration=3.007714101 podStartE2EDuration="5.62538024s" podCreationTimestamp="2026-01-29 12:47:28 +0000 UTC" firstStartedPulling="2026-01-29 12:47:30.55765917 +0000 UTC m=+2582.220639063" lastFinishedPulling="2026-01-29 12:47:33.175325309 +0000 UTC m=+2584.838305202" observedRunningTime="2026-01-29 12:47:33.598180511 +0000 UTC m=+2585.261160414" watchObservedRunningTime="2026-01-29 12:47:33.62538024 +0000 UTC m=+2585.288360123" Jan 29 12:47:39 crc kubenswrapper[4840]: I0129 12:47:39.101568 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:39 crc kubenswrapper[4840]: I0129 12:47:39.102401 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:39 crc kubenswrapper[4840]: I0129 12:47:39.169999 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:39 crc kubenswrapper[4840]: I0129 12:47:39.688052 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:39 crc kubenswrapper[4840]: I0129 12:47:39.747045 
4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gttsm"] Jan 29 12:47:41 crc kubenswrapper[4840]: I0129 12:47:41.652614 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gttsm" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="registry-server" containerID="cri-o://f316f6d869a97105123065f9bef40df2e3b7f08f4f8bf08a92b56b20aec03e6c" gracePeriod=2 Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.001749 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:47:42 crc kubenswrapper[4840]: E0129 12:47:42.002698 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.664687 4840 generic.go:334] "Generic (PLEG): container finished" podID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerID="f316f6d869a97105123065f9bef40df2e3b7f08f4f8bf08a92b56b20aec03e6c" exitCode=0 Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.664749 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gttsm" event={"ID":"cb961e69-dbd4-4f54-8457-bb0f6a8900f3","Type":"ContainerDied","Data":"f316f6d869a97105123065f9bef40df2e3b7f08f4f8bf08a92b56b20aec03e6c"} Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.742974 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.868025 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-catalog-content\") pod \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.868178 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mgtd\" (UniqueName: \"kubernetes.io/projected/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-kube-api-access-6mgtd\") pod \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.868240 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-utilities\") pod \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\" (UID: \"cb961e69-dbd4-4f54-8457-bb0f6a8900f3\") " Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.869830 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-utilities" (OuterVolumeSpecName: "utilities") pod "cb961e69-dbd4-4f54-8457-bb0f6a8900f3" (UID: "cb961e69-dbd4-4f54-8457-bb0f6a8900f3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.879497 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-kube-api-access-6mgtd" (OuterVolumeSpecName: "kube-api-access-6mgtd") pod "cb961e69-dbd4-4f54-8457-bb0f6a8900f3" (UID: "cb961e69-dbd4-4f54-8457-bb0f6a8900f3"). InnerVolumeSpecName "kube-api-access-6mgtd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.970035 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mgtd\" (UniqueName: \"kubernetes.io/projected/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-kube-api-access-6mgtd\") on node \"crc\" DevicePath \"\"" Jan 29 12:47:42 crc kubenswrapper[4840]: I0129 12:47:42.970091 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:47:43 crc kubenswrapper[4840]: I0129 12:47:43.680849 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gttsm" event={"ID":"cb961e69-dbd4-4f54-8457-bb0f6a8900f3","Type":"ContainerDied","Data":"eccfd961ad67e2cf4f2b72ee9a3005ef71e62e557a9a00db6f4186bd643a0db5"} Jan 29 12:47:43 crc kubenswrapper[4840]: I0129 12:47:43.681418 4840 scope.go:117] "RemoveContainer" containerID="f316f6d869a97105123065f9bef40df2e3b7f08f4f8bf08a92b56b20aec03e6c" Jan 29 12:47:43 crc kubenswrapper[4840]: I0129 12:47:43.681058 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gttsm" Jan 29 12:47:43 crc kubenswrapper[4840]: I0129 12:47:43.721050 4840 scope.go:117] "RemoveContainer" containerID="abd3c72023df93ed0c29cccd0db72fc581f19db32d8ab3e8a2a343be07564043" Jan 29 12:47:43 crc kubenswrapper[4840]: I0129 12:47:43.748898 4840 scope.go:117] "RemoveContainer" containerID="6f04757873dc0d1e363130eb11330170ee3e6a8af08d14c5469daf5bce09198f" Jan 29 12:47:44 crc kubenswrapper[4840]: I0129 12:47:44.866205 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb961e69-dbd4-4f54-8457-bb0f6a8900f3" (UID: "cb961e69-dbd4-4f54-8457-bb0f6a8900f3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:47:44 crc kubenswrapper[4840]: I0129 12:47:44.920776 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb961e69-dbd4-4f54-8457-bb0f6a8900f3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:47:44 crc kubenswrapper[4840]: I0129 12:47:44.925941 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gttsm"] Jan 29 12:47:44 crc kubenswrapper[4840]: I0129 12:47:44.933156 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gttsm"] Jan 29 12:47:45 crc kubenswrapper[4840]: I0129 12:47:45.013724 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" path="/var/lib/kubelet/pods/cb961e69-dbd4-4f54-8457-bb0f6a8900f3/volumes" Jan 29 12:47:56 crc kubenswrapper[4840]: I0129 12:47:56.001731 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c" Jan 29 12:47:56 crc kubenswrapper[4840]: I0129 12:47:56.810736 4840 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"f1bc3c42fd768f598a10ff6a0dbd9462d5b2072505f933a5467b8f6ee3913930"} Jan 29 12:48:41 crc kubenswrapper[4840]: I0129 12:48:41.388015 4840 scope.go:117] "RemoveContainer" containerID="c2b30741e4d36142409a0a3a3e8af5eed138e883e15700ba8d137b6a6834a632" Jan 29 12:48:41 crc kubenswrapper[4840]: I0129 12:48:41.426727 4840 scope.go:117] "RemoveContainer" containerID="85fb9f1cbc8deaa5e26deaddaeb4ea40a0a383c748e0d75bb1da913b55d77dd1" Jan 29 12:48:41 crc kubenswrapper[4840]: I0129 12:48:41.446905 4840 scope.go:117] "RemoveContainer" containerID="d0b46dca8c56de80f720dcff573e22b23701e7284ee35ce6b2984c2146132c80" Jan 29 12:50:23 crc kubenswrapper[4840]: I0129 12:50:23.522287 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:50:23 crc kubenswrapper[4840]: I0129 12:50:23.523126 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:50:53 crc kubenswrapper[4840]: I0129 12:50:53.521727 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:50:53 crc kubenswrapper[4840]: I0129 12:50:53.523006 4840 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.370851 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4jgbf"] Jan 29 12:51:10 crc kubenswrapper[4840]: E0129 12:51:10.374742 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="extract-utilities" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.374873 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="extract-utilities" Jan 29 12:51:10 crc kubenswrapper[4840]: E0129 12:51:10.374983 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="extract-content" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.375072 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="extract-content" Jan 29 12:51:10 crc kubenswrapper[4840]: E0129 12:51:10.375168 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="registry-server" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.375251 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="registry-server" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.375606 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb961e69-dbd4-4f54-8457-bb0f6a8900f3" containerName="registry-server" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.377105 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.392322 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jgbf"] Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.484532 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cf6z\" (UniqueName: \"kubernetes.io/projected/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-kube-api-access-6cf6z\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.485090 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-utilities\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.485234 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-catalog-content\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.587397 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-utilities\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.587464 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-catalog-content\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.587539 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cf6z\" (UniqueName: \"kubernetes.io/projected/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-kube-api-access-6cf6z\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.588555 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-utilities\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.588643 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-catalog-content\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.610731 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cf6z\" (UniqueName: \"kubernetes.io/projected/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-kube-api-access-6cf6z\") pod \"redhat-operators-4jgbf\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") " pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:10 crc kubenswrapper[4840]: I0129 12:51:10.699719 4840 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:11 crc kubenswrapper[4840]: I0129 12:51:11.166844 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4jgbf"] Jan 29 12:51:11 crc kubenswrapper[4840]: I0129 12:51:11.398556 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jgbf" event={"ID":"f5fdf0ea-d7ca-4f78-894b-af609b1204c1","Type":"ContainerStarted","Data":"35ab6ac2e362391aca01d3aa7a765ad3987d485db9114036c32d6cdae1b53729"} Jan 29 12:51:12 crc kubenswrapper[4840]: I0129 12:51:12.408685 4840 generic.go:334] "Generic (PLEG): container finished" podID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerID="681758ae3b8373bb25a4c9d37d8815d2fff28d8a3bd0c97a6ad090fd898f4bcc" exitCode=0 Jan 29 12:51:12 crc kubenswrapper[4840]: I0129 12:51:12.408745 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jgbf" event={"ID":"f5fdf0ea-d7ca-4f78-894b-af609b1204c1","Type":"ContainerDied","Data":"681758ae3b8373bb25a4c9d37d8815d2fff28d8a3bd0c97a6ad090fd898f4bcc"} Jan 29 12:51:15 crc kubenswrapper[4840]: I0129 12:51:15.435521 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jgbf" event={"ID":"f5fdf0ea-d7ca-4f78-894b-af609b1204c1","Type":"ContainerStarted","Data":"d98fb5ad7d9a5d3fd349279627bcaedf6f7ba5f92a34ee0db2dac7aed65ccf63"} Jan 29 12:51:16 crc kubenswrapper[4840]: I0129 12:51:16.444957 4840 generic.go:334] "Generic (PLEG): container finished" podID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerID="d98fb5ad7d9a5d3fd349279627bcaedf6f7ba5f92a34ee0db2dac7aed65ccf63" exitCode=0 Jan 29 12:51:16 crc kubenswrapper[4840]: I0129 12:51:16.444978 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jgbf" 
event={"ID":"f5fdf0ea-d7ca-4f78-894b-af609b1204c1","Type":"ContainerDied","Data":"d98fb5ad7d9a5d3fd349279627bcaedf6f7ba5f92a34ee0db2dac7aed65ccf63"} Jan 29 12:51:18 crc kubenswrapper[4840]: I0129 12:51:18.464417 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jgbf" event={"ID":"f5fdf0ea-d7ca-4f78-894b-af609b1204c1","Type":"ContainerStarted","Data":"2f438720a617cfb8874cc94cd745b44d584a411f3d969457160c7a0af90bd630"} Jan 29 12:51:18 crc kubenswrapper[4840]: I0129 12:51:18.484223 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4jgbf" podStartSLOduration=2.803771624 podStartE2EDuration="8.484199657s" podCreationTimestamp="2026-01-29 12:51:10 +0000 UTC" firstStartedPulling="2026-01-29 12:51:12.411009582 +0000 UTC m=+2804.073989475" lastFinishedPulling="2026-01-29 12:51:18.091437615 +0000 UTC m=+2809.754417508" observedRunningTime="2026-01-29 12:51:18.481051942 +0000 UTC m=+2810.144031835" watchObservedRunningTime="2026-01-29 12:51:18.484199657 +0000 UTC m=+2810.147179550" Jan 29 12:51:20 crc kubenswrapper[4840]: I0129 12:51:20.701782 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:20 crc kubenswrapper[4840]: I0129 12:51:20.702996 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4jgbf" Jan 29 12:51:21 crc kubenswrapper[4840]: I0129 12:51:21.755626 4840 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4jgbf" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="registry-server" probeResult="failure" output=< Jan 29 12:51:21 crc kubenswrapper[4840]: timeout: failed to connect service ":50051" within 1s Jan 29 12:51:21 crc kubenswrapper[4840]: > Jan 29 12:51:23 crc kubenswrapper[4840]: I0129 12:51:23.523802 4840 patch_prober.go:28] interesting 
pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:51:23 crc kubenswrapper[4840]: I0129 12:51:23.524380 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:51:23 crc kubenswrapper[4840]: I0129 12:51:23.524470 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d"
Jan 29 12:51:23 crc kubenswrapper[4840]: I0129 12:51:23.525627 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f1bc3c42fd768f598a10ff6a0dbd9462d5b2072505f933a5467b8f6ee3913930"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 29 12:51:23 crc kubenswrapper[4840]: I0129 12:51:23.525734 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://f1bc3c42fd768f598a10ff6a0dbd9462d5b2072505f933a5467b8f6ee3913930" gracePeriod=600
Jan 29 12:51:24 crc kubenswrapper[4840]: I0129 12:51:24.509272 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="f1bc3c42fd768f598a10ff6a0dbd9462d5b2072505f933a5467b8f6ee3913930" exitCode=0
Jan 29 12:51:24 crc kubenswrapper[4840]: I0129 12:51:24.509329 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"f1bc3c42fd768f598a10ff6a0dbd9462d5b2072505f933a5467b8f6ee3913930"}
Jan 29 12:51:24 crc kubenswrapper[4840]: I0129 12:51:24.509416 4840 scope.go:117] "RemoveContainer" containerID="2881fd5187161c24608d20f59e7b83b354c9f105774358014071fbcd98d2d62c"
Jan 29 12:51:26 crc kubenswrapper[4840]: I0129 12:51:26.535439 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"}
Jan 29 12:51:30 crc kubenswrapper[4840]: I0129 12:51:30.757065 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4jgbf"
Jan 29 12:51:30 crc kubenswrapper[4840]: I0129 12:51:30.805417 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4jgbf"
Jan 29 12:51:30 crc kubenswrapper[4840]: I0129 12:51:30.991501 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jgbf"]
Jan 29 12:51:32 crc kubenswrapper[4840]: I0129 12:51:32.574532 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4jgbf" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="registry-server" containerID="cri-o://2f438720a617cfb8874cc94cd745b44d584a411f3d969457160c7a0af90bd630" gracePeriod=2
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.583286 4840 generic.go:334] "Generic (PLEG): container finished" podID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerID="2f438720a617cfb8874cc94cd745b44d584a411f3d969457160c7a0af90bd630" exitCode=0
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.583368 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jgbf" event={"ID":"f5fdf0ea-d7ca-4f78-894b-af609b1204c1","Type":"ContainerDied","Data":"2f438720a617cfb8874cc94cd745b44d584a411f3d969457160c7a0af90bd630"}
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.650701 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jgbf"
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.680604 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cf6z\" (UniqueName: \"kubernetes.io/projected/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-kube-api-access-6cf6z\") pod \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") "
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.680700 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-catalog-content\") pod \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") "
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.680786 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-utilities\") pod \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\" (UID: \"f5fdf0ea-d7ca-4f78-894b-af609b1204c1\") "
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.681784 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-utilities" (OuterVolumeSpecName: "utilities") pod "f5fdf0ea-d7ca-4f78-894b-af609b1204c1" (UID: "f5fdf0ea-d7ca-4f78-894b-af609b1204c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.688684 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-kube-api-access-6cf6z" (OuterVolumeSpecName: "kube-api-access-6cf6z") pod "f5fdf0ea-d7ca-4f78-894b-af609b1204c1" (UID: "f5fdf0ea-d7ca-4f78-894b-af609b1204c1"). InnerVolumeSpecName "kube-api-access-6cf6z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.783223 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cf6z\" (UniqueName: \"kubernetes.io/projected/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-kube-api-access-6cf6z\") on node \"crc\" DevicePath \"\""
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.783547 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.805931 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f5fdf0ea-d7ca-4f78-894b-af609b1204c1" (UID: "f5fdf0ea-d7ca-4f78-894b-af609b1204c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:51:33 crc kubenswrapper[4840]: I0129 12:51:33.885429 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f5fdf0ea-d7ca-4f78-894b-af609b1204c1-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:51:34 crc kubenswrapper[4840]: I0129 12:51:34.593310 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4jgbf" event={"ID":"f5fdf0ea-d7ca-4f78-894b-af609b1204c1","Type":"ContainerDied","Data":"35ab6ac2e362391aca01d3aa7a765ad3987d485db9114036c32d6cdae1b53729"}
Jan 29 12:51:34 crc kubenswrapper[4840]: I0129 12:51:34.593408 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4jgbf"
Jan 29 12:51:34 crc kubenswrapper[4840]: I0129 12:51:34.593937 4840 scope.go:117] "RemoveContainer" containerID="2f438720a617cfb8874cc94cd745b44d584a411f3d969457160c7a0af90bd630"
Jan 29 12:51:34 crc kubenswrapper[4840]: I0129 12:51:34.621663 4840 scope.go:117] "RemoveContainer" containerID="d98fb5ad7d9a5d3fd349279627bcaedf6f7ba5f92a34ee0db2dac7aed65ccf63"
Jan 29 12:51:34 crc kubenswrapper[4840]: I0129 12:51:34.635824 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4jgbf"]
Jan 29 12:51:34 crc kubenswrapper[4840]: I0129 12:51:34.653814 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4jgbf"]
Jan 29 12:51:34 crc kubenswrapper[4840]: I0129 12:51:34.678741 4840 scope.go:117] "RemoveContainer" containerID="681758ae3b8373bb25a4c9d37d8815d2fff28d8a3bd0c97a6ad090fd898f4bcc"
Jan 29 12:51:35 crc kubenswrapper[4840]: I0129 12:51:35.013241 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" path="/var/lib/kubelet/pods/f5fdf0ea-d7ca-4f78-894b-af609b1204c1/volumes"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.230507 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ptm6m"]
Jan 29 12:51:50 crc kubenswrapper[4840]: E0129 12:51:50.231640 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="extract-utilities"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.231656 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="extract-utilities"
Jan 29 12:51:50 crc kubenswrapper[4840]: E0129 12:51:50.231674 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="extract-content"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.231683 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="extract-content"
Jan 29 12:51:50 crc kubenswrapper[4840]: E0129 12:51:50.231701 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="registry-server"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.231707 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="registry-server"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.231880 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5fdf0ea-d7ca-4f78-894b-af609b1204c1" containerName="registry-server"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.233125 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.244556 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptm6m"]
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.361203 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwttl\" (UniqueName: \"kubernetes.io/projected/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-kube-api-access-dwttl\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.361320 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-catalog-content\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.361350 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-utilities\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.462974 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-catalog-content\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.463041 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-utilities\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.463122 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwttl\" (UniqueName: \"kubernetes.io/projected/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-kube-api-access-dwttl\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.463727 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-catalog-content\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.463791 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-utilities\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.489619 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwttl\" (UniqueName: \"kubernetes.io/projected/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-kube-api-access-dwttl\") pod \"community-operators-ptm6m\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") " pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:50 crc kubenswrapper[4840]: I0129 12:51:50.571188 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:51:51 crc kubenswrapper[4840]: I0129 12:51:51.082651 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ptm6m"]
Jan 29 12:51:51 crc kubenswrapper[4840]: I0129 12:51:51.737163 4840 generic.go:334] "Generic (PLEG): container finished" podID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerID="bbcf356efde073787d66d2844ad78150d682d1a8a8e55b2fa10d7ae903463c28" exitCode=0
Jan 29 12:51:51 crc kubenswrapper[4840]: I0129 12:51:51.737214 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptm6m" event={"ID":"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5","Type":"ContainerDied","Data":"bbcf356efde073787d66d2844ad78150d682d1a8a8e55b2fa10d7ae903463c28"}
Jan 29 12:51:51 crc kubenswrapper[4840]: I0129 12:51:51.737608 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptm6m" event={"ID":"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5","Type":"ContainerStarted","Data":"596a7ad919165fd04c69f4d08ef666910a3b262898ba98fd037465c1a391362d"}
Jan 29 12:51:53 crc kubenswrapper[4840]: I0129 12:51:53.754429 4840 generic.go:334] "Generic (PLEG): container finished" podID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerID="acdb0a8fce0be19a4653cdefb8bac1759dc647e4b71539ec582c972e9d56b5e1" exitCode=0
Jan 29 12:51:53 crc kubenswrapper[4840]: I0129 12:51:53.754537 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptm6m" event={"ID":"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5","Type":"ContainerDied","Data":"acdb0a8fce0be19a4653cdefb8bac1759dc647e4b71539ec582c972e9d56b5e1"}
Jan 29 12:51:55 crc kubenswrapper[4840]: I0129 12:51:55.771381 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptm6m" event={"ID":"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5","Type":"ContainerStarted","Data":"d08c7216a2a0cca2d1957e2a8d692fff2ff160e0fe14ea1636655e5ab3cb3f7c"}
Jan 29 12:51:55 crc kubenswrapper[4840]: I0129 12:51:55.791326 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ptm6m" podStartSLOduration=2.649723174 podStartE2EDuration="5.791306913s" podCreationTimestamp="2026-01-29 12:51:50 +0000 UTC" firstStartedPulling="2026-01-29 12:51:51.738933811 +0000 UTC m=+2843.401913724" lastFinishedPulling="2026-01-29 12:51:54.88051757 +0000 UTC m=+2846.543497463" observedRunningTime="2026-01-29 12:51:55.790626725 +0000 UTC m=+2847.453606618" watchObservedRunningTime="2026-01-29 12:51:55.791306913 +0000 UTC m=+2847.454286806"
Jan 29 12:52:00 crc kubenswrapper[4840]: I0129 12:52:00.572536 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:52:00 crc kubenswrapper[4840]: I0129 12:52:00.573167 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:52:00 crc kubenswrapper[4840]: I0129 12:52:00.636463 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:52:00 crc kubenswrapper[4840]: I0129 12:52:00.878451 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:52:00 crc kubenswrapper[4840]: I0129 12:52:00.932788 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptm6m"]
Jan 29 12:52:02 crc kubenswrapper[4840]: I0129 12:52:02.842043 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ptm6m" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="registry-server" containerID="cri-o://d08c7216a2a0cca2d1957e2a8d692fff2ff160e0fe14ea1636655e5ab3cb3f7c" gracePeriod=2
Jan 29 12:52:03 crc kubenswrapper[4840]: I0129 12:52:03.852346 4840 generic.go:334] "Generic (PLEG): container finished" podID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerID="d08c7216a2a0cca2d1957e2a8d692fff2ff160e0fe14ea1636655e5ab3cb3f7c" exitCode=0
Jan 29 12:52:03 crc kubenswrapper[4840]: I0129 12:52:03.852455 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptm6m" event={"ID":"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5","Type":"ContainerDied","Data":"d08c7216a2a0cca2d1957e2a8d692fff2ff160e0fe14ea1636655e5ab3cb3f7c"}
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.170232 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.286772 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-utilities\") pod \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") "
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.286853 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-catalog-content\") pod \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") "
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.286895 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwttl\" (UniqueName: \"kubernetes.io/projected/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-kube-api-access-dwttl\") pod \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\" (UID: \"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5\") "
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.287769 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-utilities" (OuterVolumeSpecName: "utilities") pod "45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" (UID: "45f4dfe8-d4a2-447d-a251-3765fa8cbbc5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.325685 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-kube-api-access-dwttl" (OuterVolumeSpecName: "kube-api-access-dwttl") pod "45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" (UID: "45f4dfe8-d4a2-447d-a251-3765fa8cbbc5"). InnerVolumeSpecName "kube-api-access-dwttl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.388625 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.388671 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwttl\" (UniqueName: \"kubernetes.io/projected/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-kube-api-access-dwttl\") on node \"crc\" DevicePath \"\""
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.861779 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ptm6m" event={"ID":"45f4dfe8-d4a2-447d-a251-3765fa8cbbc5","Type":"ContainerDied","Data":"596a7ad919165fd04c69f4d08ef666910a3b262898ba98fd037465c1a391362d"}
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.861852 4840 scope.go:117] "RemoveContainer" containerID="d08c7216a2a0cca2d1957e2a8d692fff2ff160e0fe14ea1636655e5ab3cb3f7c"
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.861862 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ptm6m"
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.879110 4840 scope.go:117] "RemoveContainer" containerID="acdb0a8fce0be19a4653cdefb8bac1759dc647e4b71539ec582c972e9d56b5e1"
Jan 29 12:52:04 crc kubenswrapper[4840]: I0129 12:52:04.900873 4840 scope.go:117] "RemoveContainer" containerID="bbcf356efde073787d66d2844ad78150d682d1a8a8e55b2fa10d7ae903463c28"
Jan 29 12:52:06 crc kubenswrapper[4840]: I0129 12:52:06.242155 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" (UID: "45f4dfe8-d4a2-447d-a251-3765fa8cbbc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:52:06 crc kubenswrapper[4840]: I0129 12:52:06.322053 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:52:06 crc kubenswrapper[4840]: I0129 12:52:06.400209 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ptm6m"]
Jan 29 12:52:06 crc kubenswrapper[4840]: I0129 12:52:06.407589 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ptm6m"]
Jan 29 12:52:07 crc kubenswrapper[4840]: I0129 12:52:07.013052 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" path="/var/lib/kubelet/pods/45f4dfe8-d4a2-447d-a251-3765fa8cbbc5/volumes"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.141585 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-78rwl/must-gather-zjfhh"]
Jan 29 12:53:14 crc kubenswrapper[4840]: E0129 12:53:14.142791 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="extract-utilities"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.142812 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="extract-utilities"
Jan 29 12:53:14 crc kubenswrapper[4840]: E0129 12:53:14.142834 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="extract-content"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.142842 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="extract-content"
Jan 29 12:53:14 crc kubenswrapper[4840]: E0129 12:53:14.142878 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="registry-server"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.142887 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="registry-server"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.143069 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f4dfe8-d4a2-447d-a251-3765fa8cbbc5" containerName="registry-server"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.144167 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.148296 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-78rwl"/"default-dockercfg-mpd5t"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.148442 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-78rwl"/"openshift-service-ca.crt"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.152967 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-78rwl"/"kube-root-ca.crt"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.177449 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-78rwl/must-gather-zjfhh"]
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.286784 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtvqf\" (UniqueName: \"kubernetes.io/projected/111aa409-55e1-4791-8ee0-574fc225780a-kube-api-access-rtvqf\") pod \"must-gather-zjfhh\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.286919 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/111aa409-55e1-4791-8ee0-574fc225780a-must-gather-output\") pod \"must-gather-zjfhh\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.388706 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtvqf\" (UniqueName: \"kubernetes.io/projected/111aa409-55e1-4791-8ee0-574fc225780a-kube-api-access-rtvqf\") pod \"must-gather-zjfhh\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.388777 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/111aa409-55e1-4791-8ee0-574fc225780a-must-gather-output\") pod \"must-gather-zjfhh\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.389210 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/111aa409-55e1-4791-8ee0-574fc225780a-must-gather-output\") pod \"must-gather-zjfhh\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.410266 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtvqf\" (UniqueName: \"kubernetes.io/projected/111aa409-55e1-4791-8ee0-574fc225780a-kube-api-access-rtvqf\") pod \"must-gather-zjfhh\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.466709 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78rwl/must-gather-zjfhh"
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.904360 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-78rwl/must-gather-zjfhh"]
Jan 29 12:53:14 crc kubenswrapper[4840]: I0129 12:53:14.923057 4840 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 29 12:53:15 crc kubenswrapper[4840]: I0129 12:53:15.428407 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78rwl/must-gather-zjfhh" event={"ID":"111aa409-55e1-4791-8ee0-574fc225780a","Type":"ContainerStarted","Data":"3680ddee41db1f64846a67107ef38b793d7825179a7db2ac79ace75fe47f316d"}
Jan 29 12:53:21 crc kubenswrapper[4840]: I0129 12:53:21.481071 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78rwl/must-gather-zjfhh" event={"ID":"111aa409-55e1-4791-8ee0-574fc225780a","Type":"ContainerStarted","Data":"74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459"}
Jan 29 12:53:21 crc kubenswrapper[4840]: I0129 12:53:21.481882 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78rwl/must-gather-zjfhh" event={"ID":"111aa409-55e1-4791-8ee0-574fc225780a","Type":"ContainerStarted","Data":"1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6"}
Jan 29 12:53:21 crc kubenswrapper[4840]: I0129 12:53:21.498193 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-78rwl/must-gather-zjfhh" podStartSLOduration=1.357937438 podStartE2EDuration="7.498168228s" podCreationTimestamp="2026-01-29 12:53:14 +0000 UTC" firstStartedPulling="2026-01-29 12:53:14.922790683 +0000 UTC m=+2926.585770576" lastFinishedPulling="2026-01-29 12:53:21.063021473 +0000 UTC m=+2932.726001366" observedRunningTime="2026-01-29 12:53:21.492334349 +0000 UTC m=+2933.155314262" watchObservedRunningTime="2026-01-29 12:53:21.498168228 +0000 UTC m=+2933.161148121"
Jan 29 12:53:53 crc kubenswrapper[4840]: I0129 12:53:53.521843 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 29 12:53:53 crc kubenswrapper[4840]: I0129 12:53:53.522521 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 29 12:54:15 crc kubenswrapper[4840]: I0129 12:54:15.931474 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6pkfg"]
Jan 29 12:54:15 crc kubenswrapper[4840]: I0129 12:54:15.933573 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:15 crc kubenswrapper[4840]: I0129 12:54:15.957069 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pkfg"]
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.090342 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-utilities\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.090399 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt6fn\" (UniqueName: \"kubernetes.io/projected/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-kube-api-access-wt6fn\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.090448 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-catalog-content\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.191833 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-utilities\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.192561 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt6fn\" (UniqueName: \"kubernetes.io/projected/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-kube-api-access-wt6fn\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.192493 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-utilities\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.192645 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-catalog-content\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.192984 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-catalog-content\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.241714 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt6fn\" (UniqueName: \"kubernetes.io/projected/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-kube-api-access-wt6fn\") pod \"redhat-marketplace-6pkfg\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.253414 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pkfg"
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.854803 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pkfg"]
Jan 29 12:54:16 crc kubenswrapper[4840]: I0129 12:54:16.893071 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pkfg" event={"ID":"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0","Type":"ContainerStarted","Data":"508fe0b8c0239a0e418a0cdf69f765dee85fdd371b545ab8e0ad568758a0f107"}
Jan 29 12:54:17 crc kubenswrapper[4840]: I0129 12:54:17.900265 4840 generic.go:334] "Generic (PLEG): container finished" podID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerID="600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112" exitCode=0
Jan 29 12:54:17 crc kubenswrapper[4840]: I0129 12:54:17.900330 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pkfg" event={"ID":"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0","Type":"ContainerDied","Data":"600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112"}
Jan 29 12:54:18 crc kubenswrapper[4840]: I0129 12:54:18.910853 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pkfg" event={"ID":"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0","Type":"ContainerStarted","Data":"e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed"}
Jan 29 12:54:19 crc kubenswrapper[4840]: I0129 12:54:19.920832 4840 generic.go:334] "Generic (PLEG): container finished" podID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerID="e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed" exitCode=0
Jan 29 12:54:19 crc kubenswrapper[4840]: I0129 12:54:19.920936 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pkfg"
event={"ID":"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0","Type":"ContainerDied","Data":"e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed"} Jan 29 12:54:20 crc kubenswrapper[4840]: I0129 12:54:20.932483 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pkfg" event={"ID":"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0","Type":"ContainerStarted","Data":"b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98"} Jan 29 12:54:20 crc kubenswrapper[4840]: I0129 12:54:20.955492 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6pkfg" podStartSLOduration=3.288225003 podStartE2EDuration="5.95547236s" podCreationTimestamp="2026-01-29 12:54:15 +0000 UTC" firstStartedPulling="2026-01-29 12:54:17.90168127 +0000 UTC m=+2989.564661163" lastFinishedPulling="2026-01-29 12:54:20.568928627 +0000 UTC m=+2992.231908520" observedRunningTime="2026-01-29 12:54:20.95433671 +0000 UTC m=+2992.617316623" watchObservedRunningTime="2026-01-29 12:54:20.95547236 +0000 UTC m=+2992.618452243" Jan 29 12:54:23 crc kubenswrapper[4840]: I0129 12:54:23.521736 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:54:23 crc kubenswrapper[4840]: I0129 12:54:23.521791 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:54:24 crc kubenswrapper[4840]: I0129 12:54:24.971198 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-657667746d-k2jnt_7f388a54-98a3-410d-a742-c6e4501b70e0/manager/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.170815 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql_1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f/util/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.361660 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql_1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f/util/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.373607 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql_1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f/pull/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.383365 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql_1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f/pull/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.544137 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql_1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f/util/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.548862 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql_1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f/pull/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.587241 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_c4e852f88ee0550736c43fc6f039704d50126811cd6410eb28002ff66f5q2ql_1f6f4efa-934e-4ce1-b38a-ad7d0f6da89f/extract/0.log" Jan 29 12:54:25 crc 
kubenswrapper[4840]: I0129 12:54:25.729053 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7595cf584-blgz5_8a01e8b6-1daa-48ba-98e1-1c99f6c4f5e0/manager/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.814080 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-55d5d5f8ff-xkk5v_a2708669-810e-4c1f-8843-eb738dfec7e9/manager/0.log" Jan 29 12:54:25 crc kubenswrapper[4840]: I0129 12:54:25.933695 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-6db5dbd896-rxp7k_ef7c6b60-290c-4ff4-b3df-b9bebd38cd07/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.019144 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5499bccc75-cbdm7_66a68829-6bd5-4912-9ee1-532cfd70df6e/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.115410 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-q5m62_2dc9b4aa-090d-43e7-b7d0-060d29ae213b/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.211191 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-wcq6w_d19fb200-1d61-4334-b6aa-7b45c8b79502/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.254110 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6pkfg" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.254386 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6pkfg" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.305907 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-6pkfg" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.308335 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-56cb7c4b4c-m46ns_7671ead5-a005-40a7-b132-adda6935e9b8/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.420310 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-77bb7ffb8c-qht99_bfedf099-a451-4d2f-b473-4c7756870b55/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.514827 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6475bdcbc4-tf2sq_6a18af7c-6e6a-495b-88a7-7c44230a5ead/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.637029 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-k7s4n_3928ea1f-3fa6-43b2-abe9-1f0554100f8c/manager/0.log" Jan 29 12:54:26 crc kubenswrapper[4840]: I0129 12:54:26.699142 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-55df775b69-7lcnz_06f49550-e51b-4284-b192-ff5829c2bfaf/manager/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.034491 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6pkfg" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.085627 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5ccd5b7f8f-c6hc5_9d94bfa3-811b-4590-b407-220d5f249477/manager/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.093153 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pkfg"] Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.163164 4840 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6b855b4fc4-mrvgg_e6ee0757-467e-4f45-a612-0c6db645276b/manager/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.278234 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dxf7kg_dceb14fc-bfee-481f-82dc-c0f60d39f650/manager/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.443453 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-59c8666fb5-rksqv_a1c64bd2-0b59-4e9e-aab6-16836c7b117e/operator/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.546267 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-65dc8f5954-5vmnj_201f6c04-44f6-4d28-bb64-b6f99c322f55/manager/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.641350 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-gg8jj_fffc9f72-ccc3-41d2-b10d-60465c7b85e7/registry-server/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.758976 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-2gldt_3ea9bf9d-883b-4e69-9ae6-6354a4004de3/manager/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.832583 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-znb4m_7484cebd-11a3-4740-9791-dd3949b8dcaa/manager/0.log" Jan 29 12:54:27 crc kubenswrapper[4840]: I0129 12:54:27.916327 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-lm2m4_bfc51868-5269-40cb-b17f-054309802b44/operator/0.log" Jan 29 12:54:28 crc kubenswrapper[4840]: I0129 
12:54:28.061770 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-6f7455757b-qgrxt_8c034357-fd7c-4ff7-9b61-5a8491a6b34a/manager/0.log" Jan 29 12:54:28 crc kubenswrapper[4840]: I0129 12:54:28.097509 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-c95fd9dc5-zjkxq_7dca7ac4-5015-46ef-aa82-15fea812cca2/manager/0.log" Jan 29 12:54:28 crc kubenswrapper[4840]: I0129 12:54:28.283997 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-ktps6_bc781e4b-94bd-4575-ba55-4fbbe3a1a8d5/manager/0.log" Jan 29 12:54:28 crc kubenswrapper[4840]: I0129 12:54:28.342049 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-56b5dc77fd-7lx5t_882d0d06-c72a-46d5-985b-578094eedc4c/manager/0.log" Jan 29 12:54:28 crc kubenswrapper[4840]: I0129 12:54:28.987203 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6pkfg" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="registry-server" containerID="cri-o://b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98" gracePeriod=2 Jan 29 12:54:29 crc kubenswrapper[4840]: I0129 12:54:29.917100 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pkfg" Jan 29 12:54:29 crc kubenswrapper[4840]: I0129 12:54:29.997339 4840 generic.go:334] "Generic (PLEG): container finished" podID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerID="b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98" exitCode=0 Jan 29 12:54:29 crc kubenswrapper[4840]: I0129 12:54:29.997393 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pkfg" event={"ID":"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0","Type":"ContainerDied","Data":"b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98"} Jan 29 12:54:29 crc kubenswrapper[4840]: I0129 12:54:29.997452 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6pkfg" event={"ID":"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0","Type":"ContainerDied","Data":"508fe0b8c0239a0e418a0cdf69f765dee85fdd371b545ab8e0ad568758a0f107"} Jan 29 12:54:29 crc kubenswrapper[4840]: I0129 12:54:29.997402 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6pkfg" Jan 29 12:54:29 crc kubenswrapper[4840]: I0129 12:54:29.997471 4840 scope.go:117] "RemoveContainer" containerID="b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.017824 4840 scope.go:117] "RemoveContainer" containerID="e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.025034 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wt6fn\" (UniqueName: \"kubernetes.io/projected/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-kube-api-access-wt6fn\") pod \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.025289 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-catalog-content\") pod \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.025334 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-utilities\") pod \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\" (UID: \"525b2aed-21e0-4bbe-98e0-41f7a86f0ad0\") " Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.026578 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-utilities" (OuterVolumeSpecName: "utilities") pod "525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" (UID: "525b2aed-21e0-4bbe-98e0-41f7a86f0ad0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.035672 4840 scope.go:117] "RemoveContainer" containerID="600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.035756 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-kube-api-access-wt6fn" (OuterVolumeSpecName: "kube-api-access-wt6fn") pod "525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" (UID: "525b2aed-21e0-4bbe-98e0-41f7a86f0ad0"). InnerVolumeSpecName "kube-api-access-wt6fn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.049423 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" (UID: "525b2aed-21e0-4bbe-98e0-41f7a86f0ad0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.090409 4840 scope.go:117] "RemoveContainer" containerID="b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98" Jan 29 12:54:30 crc kubenswrapper[4840]: E0129 12:54:30.091128 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98\": container with ID starting with b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98 not found: ID does not exist" containerID="b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.091158 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98"} err="failed to get container status \"b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98\": rpc error: code = NotFound desc = could not find container \"b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98\": container with ID starting with b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98 not found: ID does not exist" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.091181 4840 scope.go:117] "RemoveContainer" containerID="e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed" Jan 29 12:54:30 crc kubenswrapper[4840]: E0129 12:54:30.091445 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed\": container with ID starting with e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed not found: ID does not exist" containerID="e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.091466 
4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed"} err="failed to get container status \"e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed\": rpc error: code = NotFound desc = could not find container \"e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed\": container with ID starting with e6cd8aedb56d439e08226fb81c000493cfea26b579727a804f247086692ca0ed not found: ID does not exist" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.091478 4840 scope.go:117] "RemoveContainer" containerID="600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112" Jan 29 12:54:30 crc kubenswrapper[4840]: E0129 12:54:30.092123 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112\": container with ID starting with 600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112 not found: ID does not exist" containerID="600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.092184 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112"} err="failed to get container status \"600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112\": rpc error: code = NotFound desc = could not find container \"600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112\": container with ID starting with 600ecfb46d1b7c5dc39ec6b77d78b6f368b75a9476cfa3a7daf95db4f3f52112 not found: ID does not exist" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.127617 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.127652 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-utilities\") on node \"crc\" DevicePath \"\"" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.127662 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wt6fn\" (UniqueName: \"kubernetes.io/projected/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0-kube-api-access-wt6fn\") on node \"crc\" DevicePath \"\"" Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.330529 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pkfg"] Jan 29 12:54:30 crc kubenswrapper[4840]: I0129 12:54:30.339826 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6pkfg"] Jan 29 12:54:31 crc kubenswrapper[4840]: I0129 12:54:31.009875 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" path="/var/lib/kubelet/pods/525b2aed-21e0-4bbe-98e0-41f7a86f0ad0/volumes" Jan 29 12:54:38 crc kubenswrapper[4840]: E0129 12:54:38.606041 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b2aed_21e0_4bbe_98e0_41f7a86f0ad0.slice/crio-b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98.scope\": RecentStats: unable to find data in memory cache]" Jan 29 12:54:46 crc kubenswrapper[4840]: I0129 12:54:46.559266 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-94shd_57eab084-b8aa-4678-b6fc-30f97fe7b52b/control-plane-machine-set-operator/0.log" Jan 29 12:54:46 crc kubenswrapper[4840]: I0129 
12:54:46.765577 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l64hx_03d4e817-3cad-4efb-ad16-dafdc1c56c8b/machine-api-operator/0.log" Jan 29 12:54:46 crc kubenswrapper[4840]: I0129 12:54:46.769249 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-l64hx_03d4e817-3cad-4efb-ad16-dafdc1c56c8b/kube-rbac-proxy/0.log" Jan 29 12:54:48 crc kubenswrapper[4840]: E0129 12:54:48.767509 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b2aed_21e0_4bbe_98e0_41f7a86f0ad0.slice/crio-b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98.scope\": RecentStats: unable to find data in memory cache]" Jan 29 12:54:53 crc kubenswrapper[4840]: I0129 12:54:53.521848 4840 patch_prober.go:28] interesting pod/machine-config-daemon-s2v8d container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 29 12:54:53 crc kubenswrapper[4840]: I0129 12:54:53.522647 4840 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 29 12:54:53 crc kubenswrapper[4840]: I0129 12:54:53.523393 4840 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" Jan 29 12:54:53 crc kubenswrapper[4840]: I0129 12:54:53.524484 4840 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"} pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 29 12:54:53 crc kubenswrapper[4840]: I0129 12:54:53.524598 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerName="machine-config-daemon" containerID="cri-o://14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" gracePeriod=600 Jan 29 12:54:53 crc kubenswrapper[4840]: E0129 12:54:53.650481 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:54:54 crc kubenswrapper[4840]: I0129 12:54:54.158091 4840 generic.go:334] "Generic (PLEG): container finished" podID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" exitCode=0 Jan 29 12:54:54 crc kubenswrapper[4840]: I0129 12:54:54.158138 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerDied","Data":"14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"} Jan 29 12:54:54 crc kubenswrapper[4840]: I0129 12:54:54.158175 4840 scope.go:117] "RemoveContainer" containerID="f1bc3c42fd768f598a10ff6a0dbd9462d5b2072505f933a5467b8f6ee3913930" Jan 29 12:54:54 crc kubenswrapper[4840]: I0129 12:54:54.158659 4840 
scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:54:54 crc kubenswrapper[4840]: E0129 12:54:54.158958 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:54:58 crc kubenswrapper[4840]: I0129 12:54:58.510846 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-t8kjb_5f4c8ed2-e3ec-4ac9-86a2-1fdb4b3cf410/cert-manager-controller/0.log" Jan 29 12:54:58 crc kubenswrapper[4840]: I0129 12:54:58.675842 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-dssln_204fbe16-f032-48b3-8e82-8b302a5d0ef1/cert-manager-cainjector/0.log" Jan 29 12:54:58 crc kubenswrapper[4840]: I0129 12:54:58.736711 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gxz5g_4952f2c9-a631-40ba-ac0c-96e4f64b52dd/cert-manager-webhook/0.log" Jan 29 12:54:58 crc kubenswrapper[4840]: E0129 12:54:58.962746 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b2aed_21e0_4bbe_98e0_41f7a86f0ad0.slice/crio-b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98.scope\": RecentStats: unable to find data in memory cache]" Jan 29 12:55:06 crc kubenswrapper[4840]: I0129 12:55:06.002546 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:55:06 crc kubenswrapper[4840]: E0129 12:55:06.004213 4840 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:55:09 crc kubenswrapper[4840]: E0129 12:55:09.138136 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b2aed_21e0_4bbe_98e0_41f7a86f0ad0.slice/crio-b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98.scope\": RecentStats: unable to find data in memory cache]" Jan 29 12:55:10 crc kubenswrapper[4840]: I0129 12:55:10.871277 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-45hj6_676e6927-6277-4bf5-bb70-7b1142c3ff01/nmstate-console-plugin/0.log" Jan 29 12:55:11 crc kubenswrapper[4840]: I0129 12:55:11.021599 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-g88ch_1d956b0e-870f-4713-af30-f1726121d630/nmstate-handler/0.log" Jan 29 12:55:11 crc kubenswrapper[4840]: I0129 12:55:11.094316 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-htgbx_bab7acdd-5080-4d9f-9889-81c2a88f0dc7/kube-rbac-proxy/0.log" Jan 29 12:55:11 crc kubenswrapper[4840]: I0129 12:55:11.208666 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-htgbx_bab7acdd-5080-4d9f-9889-81c2a88f0dc7/nmstate-metrics/0.log" Jan 29 12:55:11 crc kubenswrapper[4840]: I0129 12:55:11.248612 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-qp4cd_31770d8e-79ba-43a3-81bd-d76b310b6acc/nmstate-operator/0.log" Jan 29 12:55:11 crc kubenswrapper[4840]: I0129 12:55:11.413540 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-bq4k4_3a8ff3f5-bc33-4d34-8a3d-9947121b510d/nmstate-webhook/0.log" Jan 29 12:55:19 crc kubenswrapper[4840]: E0129 12:55:19.326214 4840 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod525b2aed_21e0_4bbe_98e0_41f7a86f0ad0.slice/crio-b5cbccb6f80a396e785b0db68870355d5eeb7c1b57337481e1c743098c34bc98.scope\": RecentStats: unable to find data in memory cache]" Jan 29 12:55:21 crc kubenswrapper[4840]: I0129 12:55:21.001025 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:55:21 crc kubenswrapper[4840]: E0129 12:55:21.003222 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:55:33 crc kubenswrapper[4840]: I0129 12:55:33.001524 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:55:33 crc kubenswrapper[4840]: E0129 12:55:33.002866 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.294036 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wpf9f_d3338d0d-1a9b-4176-84b4-a708ea1b574c/kube-rbac-proxy/0.log" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.327464 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-wpf9f_d3338d0d-1a9b-4176-84b4-a708ea1b574c/controller/0.log" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.546114 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-frr-files/0.log" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.786886 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-frr-files/0.log" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.830450 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-reloader/0.log" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.848878 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-reloader/0.log" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.864643 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-metrics/0.log" Jan 29 12:55:39 crc kubenswrapper[4840]: I0129 12:55:39.999607 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-frr-files/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 
12:55:40.042447 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-metrics/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.092747 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-reloader/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.111914 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-metrics/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.312223 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-frr-files/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.323726 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-reloader/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.335985 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/controller/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.369544 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/cp-metrics/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.580119 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/frr-metrics/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.598161 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/kube-rbac-proxy/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.622650 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/kube-rbac-proxy-frr/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.793376 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/frr/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.860594 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-k5x6q_b91baf2c-fd04-4784-8d12-a5318088ee87/frr-k8s-webhook-server/0.log" Jan 29 12:55:40 crc kubenswrapper[4840]: I0129 12:55:40.873991 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-jtlnm_96f69b16-80eb-4b5c-bf75-39c26aefd643/reloader/0.log" Jan 29 12:55:41 crc kubenswrapper[4840]: I0129 12:55:41.084536 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-77c47cd585-6mx2x_d7492c31-2b6c-43e4-9cc9-73e03dd15384/manager/0.log" Jan 29 12:55:41 crc kubenswrapper[4840]: I0129 12:55:41.110520 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-569cfcf96-gqqkk_e2c078cf-cce4-480b-9c30-f0c86abee27d/webhook-server/0.log" Jan 29 12:55:41 crc kubenswrapper[4840]: I0129 12:55:41.291333 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g26s9_1e0994f2-a882-4579-b76f-e953f6b75a25/kube-rbac-proxy/0.log" Jan 29 12:55:41 crc kubenswrapper[4840]: I0129 12:55:41.440721 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-g26s9_1e0994f2-a882-4579-b76f-e953f6b75a25/speaker/0.log" Jan 29 12:55:47 crc kubenswrapper[4840]: I0129 12:55:47.002312 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:55:47 crc kubenswrapper[4840]: E0129 12:55:47.003258 4840 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:55:56 crc kubenswrapper[4840]: I0129 12:55:56.586531 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s_a9fe01c2-57da-44b1-be60-e0d27ffc98b8/util/0.log" Jan 29 12:55:56 crc kubenswrapper[4840]: I0129 12:55:56.848391 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s_a9fe01c2-57da-44b1-be60-e0d27ffc98b8/pull/0.log" Jan 29 12:55:56 crc kubenswrapper[4840]: I0129 12:55:56.873127 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s_a9fe01c2-57da-44b1-be60-e0d27ffc98b8/util/0.log" Jan 29 12:55:56 crc kubenswrapper[4840]: I0129 12:55:56.895122 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s_a9fe01c2-57da-44b1-be60-e0d27ffc98b8/pull/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.153550 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s_a9fe01c2-57da-44b1-be60-e0d27ffc98b8/pull/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.183505 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s_a9fe01c2-57da-44b1-be60-e0d27ffc98b8/util/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 
12:55:57.227491 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc46v8s_a9fe01c2-57da-44b1-be60-e0d27ffc98b8/extract/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.371693 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh_4dd390ad-1343-4254-8e83-b36ce6269dd4/util/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.611370 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh_4dd390ad-1343-4254-8e83-b36ce6269dd4/util/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.636343 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh_4dd390ad-1343-4254-8e83-b36ce6269dd4/pull/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.646146 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh_4dd390ad-1343-4254-8e83-b36ce6269dd4/pull/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.830901 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh_4dd390ad-1343-4254-8e83-b36ce6269dd4/util/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.861549 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh_4dd390ad-1343-4254-8e83-b36ce6269dd4/pull/0.log" Jan 29 12:55:57 crc kubenswrapper[4840]: I0129 12:55:57.895579 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec7132xxjh_4dd390ad-1343-4254-8e83-b36ce6269dd4/extract/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.047378 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dlf42_5eca2916-9f95-4cf4-b35f-7aebe5f09e19/extract-utilities/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.277808 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dlf42_5eca2916-9f95-4cf4-b35f-7aebe5f09e19/extract-content/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.300631 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dlf42_5eca2916-9f95-4cf4-b35f-7aebe5f09e19/extract-utilities/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.306147 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dlf42_5eca2916-9f95-4cf4-b35f-7aebe5f09e19/extract-content/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.483185 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dlf42_5eca2916-9f95-4cf4-b35f-7aebe5f09e19/extract-content/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.493231 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-dlf42_5eca2916-9f95-4cf4-b35f-7aebe5f09e19/extract-utilities/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.700669 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cm45d_cd2d39b4-a2bb-4311-872e-a9591621717f/extract-utilities/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.795435 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-dlf42_5eca2916-9f95-4cf4-b35f-7aebe5f09e19/registry-server/0.log" Jan 29 12:55:58 crc kubenswrapper[4840]: I0129 12:55:58.964128 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cm45d_cd2d39b4-a2bb-4311-872e-a9591621717f/extract-utilities/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.005564 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:55:59 crc kubenswrapper[4840]: E0129 12:55:59.005893 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.010512 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cm45d_cd2d39b4-a2bb-4311-872e-a9591621717f/extract-content/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.040876 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cm45d_cd2d39b4-a2bb-4311-872e-a9591621717f/extract-content/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.186972 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cm45d_cd2d39b4-a2bb-4311-872e-a9591621717f/extract-utilities/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.191385 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cm45d_cd2d39b4-a2bb-4311-872e-a9591621717f/extract-content/0.log" Jan 29 12:55:59 crc 
kubenswrapper[4840]: I0129 12:55:59.488359 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-6ssn6_72c7360b-b452-432c-a48c-319d98003756/marketplace-operator/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.557317 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-cm45d_cd2d39b4-a2bb-4311-872e-a9591621717f/registry-server/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.586029 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78p7s_c6495f02-13cd-40e3-85d9-5b63d3691f0a/extract-utilities/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.878027 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78p7s_c6495f02-13cd-40e3-85d9-5b63d3691f0a/extract-utilities/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.897600 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78p7s_c6495f02-13cd-40e3-85d9-5b63d3691f0a/extract-content/0.log" Jan 29 12:55:59 crc kubenswrapper[4840]: I0129 12:55:59.929255 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78p7s_c6495f02-13cd-40e3-85d9-5b63d3691f0a/extract-content/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.200374 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78p7s_c6495f02-13cd-40e3-85d9-5b63d3691f0a/extract-utilities/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.234021 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nt7tn_744fc57c-46df-4ef5-b1f4-2dceb4f52a66/extract-utilities/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.258368 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-78p7s_c6495f02-13cd-40e3-85d9-5b63d3691f0a/registry-server/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.278933 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-78p7s_c6495f02-13cd-40e3-85d9-5b63d3691f0a/extract-content/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.505295 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nt7tn_744fc57c-46df-4ef5-b1f4-2dceb4f52a66/extract-utilities/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.515838 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nt7tn_744fc57c-46df-4ef5-b1f4-2dceb4f52a66/extract-content/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.516861 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nt7tn_744fc57c-46df-4ef5-b1f4-2dceb4f52a66/extract-content/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.758249 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nt7tn_744fc57c-46df-4ef5-b1f4-2dceb4f52a66/extract-content/0.log" Jan 29 12:56:00 crc kubenswrapper[4840]: I0129 12:56:00.808024 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nt7tn_744fc57c-46df-4ef5-b1f4-2dceb4f52a66/extract-utilities/0.log" Jan 29 12:56:01 crc kubenswrapper[4840]: I0129 12:56:01.154148 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-nt7tn_744fc57c-46df-4ef5-b1f4-2dceb4f52a66/registry-server/0.log" Jan 29 12:56:14 crc kubenswrapper[4840]: I0129 12:56:14.001237 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:56:14 crc kubenswrapper[4840]: E0129 12:56:14.002109 
4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:56:27 crc kubenswrapper[4840]: I0129 12:56:27.004638 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:56:27 crc kubenswrapper[4840]: E0129 12:56:27.005833 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:56:39 crc kubenswrapper[4840]: I0129 12:56:39.004775 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:56:39 crc kubenswrapper[4840]: E0129 12:56:39.006676 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:56:53 crc kubenswrapper[4840]: I0129 12:56:53.001423 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:56:53 crc kubenswrapper[4840]: E0129 
12:56:53.002289 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:57:04 crc kubenswrapper[4840]: I0129 12:57:04.008774 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:57:04 crc kubenswrapper[4840]: E0129 12:57:04.009682 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:57:13 crc kubenswrapper[4840]: I0129 12:57:13.723106 4840 generic.go:334] "Generic (PLEG): container finished" podID="111aa409-55e1-4791-8ee0-574fc225780a" containerID="1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6" exitCode=0 Jan 29 12:57:13 crc kubenswrapper[4840]: I0129 12:57:13.723181 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-78rwl/must-gather-zjfhh" event={"ID":"111aa409-55e1-4791-8ee0-574fc225780a","Type":"ContainerDied","Data":"1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6"} Jan 29 12:57:13 crc kubenswrapper[4840]: I0129 12:57:13.724024 4840 scope.go:117] "RemoveContainer" containerID="1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6" Jan 29 12:57:14 crc kubenswrapper[4840]: I0129 12:57:14.304692 4840 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-78rwl_must-gather-zjfhh_111aa409-55e1-4791-8ee0-574fc225780a/gather/0.log" Jan 29 12:57:16 crc kubenswrapper[4840]: I0129 12:57:16.001561 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951" Jan 29 12:57:16 crc kubenswrapper[4840]: E0129 12:57:16.002775 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.221665 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-78rwl/must-gather-zjfhh"] Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.222481 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-78rwl/must-gather-zjfhh" podUID="111aa409-55e1-4791-8ee0-574fc225780a" containerName="copy" containerID="cri-o://74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459" gracePeriod=2 Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.228560 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-78rwl/must-gather-zjfhh"] Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.668419 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78rwl_must-gather-zjfhh_111aa409-55e1-4791-8ee0-574fc225780a/copy/0.log" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.671459 4840 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-78rwl/must-gather-zjfhh" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.737364 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/111aa409-55e1-4791-8ee0-574fc225780a-must-gather-output\") pod \"111aa409-55e1-4791-8ee0-574fc225780a\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.737464 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtvqf\" (UniqueName: \"kubernetes.io/projected/111aa409-55e1-4791-8ee0-574fc225780a-kube-api-access-rtvqf\") pod \"111aa409-55e1-4791-8ee0-574fc225780a\" (UID: \"111aa409-55e1-4791-8ee0-574fc225780a\") " Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.743042 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/111aa409-55e1-4791-8ee0-574fc225780a-kube-api-access-rtvqf" (OuterVolumeSpecName: "kube-api-access-rtvqf") pod "111aa409-55e1-4791-8ee0-574fc225780a" (UID: "111aa409-55e1-4791-8ee0-574fc225780a"). InnerVolumeSpecName "kube-api-access-rtvqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.786695 4840 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-78rwl_must-gather-zjfhh_111aa409-55e1-4791-8ee0-574fc225780a/copy/0.log" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.786964 4840 generic.go:334] "Generic (PLEG): container finished" podID="111aa409-55e1-4791-8ee0-574fc225780a" containerID="74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459" exitCode=143 Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.787013 4840 scope.go:117] "RemoveContainer" containerID="74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.787117 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-78rwl/must-gather-zjfhh" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.803912 4840 scope.go:117] "RemoveContainer" containerID="1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6" Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.822352 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/111aa409-55e1-4791-8ee0-574fc225780a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "111aa409-55e1-4791-8ee0-574fc225780a" (UID: "111aa409-55e1-4791-8ee0-574fc225780a"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.838994 4840 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/111aa409-55e1-4791-8ee0-574fc225780a-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.839289 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtvqf\" (UniqueName: \"kubernetes.io/projected/111aa409-55e1-4791-8ee0-574fc225780a-kube-api-access-rtvqf\") on node \"crc\" DevicePath \"\""
Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.859308 4840 scope.go:117] "RemoveContainer" containerID="74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459"
Jan 29 12:57:21 crc kubenswrapper[4840]: E0129 12:57:21.861143 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459\": container with ID starting with 74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459 not found: ID does not exist" containerID="74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459"
Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.861193 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459"} err="failed to get container status \"74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459\": rpc error: code = NotFound desc = could not find container \"74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459\": container with ID starting with 74c343e145bc44eb6e2c6810cc38157ef97d6bf9d74c87dec620953ed09fc459 not found: ID does not exist"
Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.861225 4840 scope.go:117] "RemoveContainer" containerID="1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6"
Jan 29 12:57:21 crc kubenswrapper[4840]: E0129 12:57:21.861596 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6\": container with ID starting with 1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6 not found: ID does not exist" containerID="1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6"
Jan 29 12:57:21 crc kubenswrapper[4840]: I0129 12:57:21.861622 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6"} err="failed to get container status \"1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6\": rpc error: code = NotFound desc = could not find container \"1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6\": container with ID starting with 1c8e12a2104d161cfa8b93f1c0f96bbf85f3367e259b676caa3698a2ad7268b6 not found: ID does not exist"
Jan 29 12:57:23 crc kubenswrapper[4840]: I0129 12:57:23.009262 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="111aa409-55e1-4791-8ee0-574fc225780a" path="/var/lib/kubelet/pods/111aa409-55e1-4791-8ee0-574fc225780a/volumes"
Jan 29 12:57:27 crc kubenswrapper[4840]: I0129 12:57:27.001828 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:57:27 crc kubenswrapper[4840]: E0129 12:57:27.002371 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.010156 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:57:40 crc kubenswrapper[4840]: E0129 12:57:40.011189 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.644421 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5wwn7"]
Jan 29 12:57:40 crc kubenswrapper[4840]: E0129 12:57:40.644760 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="registry-server"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.644782 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="registry-server"
Jan 29 12:57:40 crc kubenswrapper[4840]: E0129 12:57:40.644794 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="extract-content"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.644801 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="extract-content"
Jan 29 12:57:40 crc kubenswrapper[4840]: E0129 12:57:40.644811 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="extract-utilities"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.644817 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="extract-utilities"
Jan 29 12:57:40 crc kubenswrapper[4840]: E0129 12:57:40.644833 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111aa409-55e1-4791-8ee0-574fc225780a" containerName="copy"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.644839 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="111aa409-55e1-4791-8ee0-574fc225780a" containerName="copy"
Jan 29 12:57:40 crc kubenswrapper[4840]: E0129 12:57:40.644853 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="111aa409-55e1-4791-8ee0-574fc225780a" containerName="gather"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.644858 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="111aa409-55e1-4791-8ee0-574fc225780a" containerName="gather"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.645113 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="111aa409-55e1-4791-8ee0-574fc225780a" containerName="gather"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.645127 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="525b2aed-21e0-4bbe-98e0-41f7a86f0ad0" containerName="registry-server"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.645139 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="111aa409-55e1-4791-8ee0-574fc225780a" containerName="copy"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.646176 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.664409 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wwn7"]
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.706895 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-utilities\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.706963 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-catalog-content\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.707009 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwxhn\" (UniqueName: \"kubernetes.io/projected/5c63c703-ddef-459f-bea4-0592d8cbf6e9-kube-api-access-mwxhn\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.808873 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-catalog-content\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.808988 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwxhn\" (UniqueName: \"kubernetes.io/projected/5c63c703-ddef-459f-bea4-0592d8cbf6e9-kube-api-access-mwxhn\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.809083 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-utilities\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.809588 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-utilities\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.809904 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-catalog-content\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.832998 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwxhn\" (UniqueName: \"kubernetes.io/projected/5c63c703-ddef-459f-bea4-0592d8cbf6e9-kube-api-access-mwxhn\") pod \"certified-operators-5wwn7\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") " pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:40 crc kubenswrapper[4840]: I0129 12:57:40.966828 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:41 crc kubenswrapper[4840]: I0129 12:57:41.412253 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5wwn7"]
Jan 29 12:57:41 crc kubenswrapper[4840]: I0129 12:57:41.915671 4840 generic.go:334] "Generic (PLEG): container finished" podID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerID="78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9" exitCode=0
Jan 29 12:57:41 crc kubenswrapper[4840]: I0129 12:57:41.915732 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwn7" event={"ID":"5c63c703-ddef-459f-bea4-0592d8cbf6e9","Type":"ContainerDied","Data":"78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9"}
Jan 29 12:57:41 crc kubenswrapper[4840]: I0129 12:57:41.916303 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwn7" event={"ID":"5c63c703-ddef-459f-bea4-0592d8cbf6e9","Type":"ContainerStarted","Data":"aad16452f09b1945fb46c5f9da2f883d466812602c71b828fb15b362ca901f45"}
Jan 29 12:57:42 crc kubenswrapper[4840]: I0129 12:57:42.923997 4840 generic.go:334] "Generic (PLEG): container finished" podID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerID="66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab" exitCode=0
Jan 29 12:57:42 crc kubenswrapper[4840]: I0129 12:57:42.924165 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwn7" event={"ID":"5c63c703-ddef-459f-bea4-0592d8cbf6e9","Type":"ContainerDied","Data":"66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab"}
Jan 29 12:57:44 crc kubenswrapper[4840]: I0129 12:57:44.938554 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwn7" event={"ID":"5c63c703-ddef-459f-bea4-0592d8cbf6e9","Type":"ContainerStarted","Data":"18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9"}
Jan 29 12:57:44 crc kubenswrapper[4840]: I0129 12:57:44.961642 4840 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5wwn7" podStartSLOduration=2.940782449 podStartE2EDuration="4.961623625s" podCreationTimestamp="2026-01-29 12:57:40 +0000 UTC" firstStartedPulling="2026-01-29 12:57:41.916895161 +0000 UTC m=+3193.579875054" lastFinishedPulling="2026-01-29 12:57:43.937736337 +0000 UTC m=+3195.600716230" observedRunningTime="2026-01-29 12:57:44.954535652 +0000 UTC m=+3196.617515565" watchObservedRunningTime="2026-01-29 12:57:44.961623625 +0000 UTC m=+3196.624603508"
Jan 29 12:57:50 crc kubenswrapper[4840]: I0129 12:57:50.967011 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:50 crc kubenswrapper[4840]: I0129 12:57:50.968100 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:51 crc kubenswrapper[4840]: I0129 12:57:51.020459 4840 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:51 crc kubenswrapper[4840]: I0129 12:57:51.064934 4840 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:51 crc kubenswrapper[4840]: I0129 12:57:51.257619 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wwn7"]
Jan 29 12:57:52 crc kubenswrapper[4840]: I0129 12:57:52.001494 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:57:52 crc kubenswrapper[4840]: E0129 12:57:52.001702 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:57:52 crc kubenswrapper[4840]: I0129 12:57:52.991411 4840 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5wwn7" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="registry-server" containerID="cri-o://18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9" gracePeriod=2
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.381865 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.486693 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwxhn\" (UniqueName: \"kubernetes.io/projected/5c63c703-ddef-459f-bea4-0592d8cbf6e9-kube-api-access-mwxhn\") pod \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") "
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.486786 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-utilities\") pod \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") "
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.486812 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-catalog-content\") pod \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\" (UID: \"5c63c703-ddef-459f-bea4-0592d8cbf6e9\") "
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.487734 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-utilities" (OuterVolumeSpecName: "utilities") pod "5c63c703-ddef-459f-bea4-0592d8cbf6e9" (UID: "5c63c703-ddef-459f-bea4-0592d8cbf6e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.493604 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c63c703-ddef-459f-bea4-0592d8cbf6e9-kube-api-access-mwxhn" (OuterVolumeSpecName: "kube-api-access-mwxhn") pod "5c63c703-ddef-459f-bea4-0592d8cbf6e9" (UID: "5c63c703-ddef-459f-bea4-0592d8cbf6e9"). InnerVolumeSpecName "kube-api-access-mwxhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.588311 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwxhn\" (UniqueName: \"kubernetes.io/projected/5c63c703-ddef-459f-bea4-0592d8cbf6e9-kube-api-access-mwxhn\") on node \"crc\" DevicePath \"\""
Jan 29 12:57:53 crc kubenswrapper[4840]: I0129 12:57:53.588610 4840 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-utilities\") on node \"crc\" DevicePath \"\""
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.001566 4840 generic.go:334] "Generic (PLEG): container finished" podID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerID="18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9" exitCode=0
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.001634 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwn7" event={"ID":"5c63c703-ddef-459f-bea4-0592d8cbf6e9","Type":"ContainerDied","Data":"18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9"}
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.001681 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5wwn7"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.002409 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5wwn7" event={"ID":"5c63c703-ddef-459f-bea4-0592d8cbf6e9","Type":"ContainerDied","Data":"aad16452f09b1945fb46c5f9da2f883d466812602c71b828fb15b362ca901f45"}
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.002452 4840 scope.go:117] "RemoveContainer" containerID="18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.022389 4840 scope.go:117] "RemoveContainer" containerID="66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.052012 4840 scope.go:117] "RemoveContainer" containerID="78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.070110 4840 scope.go:117] "RemoveContainer" containerID="18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9"
Jan 29 12:57:54 crc kubenswrapper[4840]: E0129 12:57:54.070641 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9\": container with ID starting with 18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9 not found: ID does not exist" containerID="18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.070699 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9"} err="failed to get container status \"18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9\": rpc error: code = NotFound desc = could not find container \"18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9\": container with ID starting with 18c5dd5302007ad438e54a382109cac75558a05f3b32e0f3fbbcf68f25725cb9 not found: ID does not exist"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.070731 4840 scope.go:117] "RemoveContainer" containerID="66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab"
Jan 29 12:57:54 crc kubenswrapper[4840]: E0129 12:57:54.071332 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab\": container with ID starting with 66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab not found: ID does not exist" containerID="66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.071352 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab"} err="failed to get container status \"66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab\": rpc error: code = NotFound desc = could not find container \"66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab\": container with ID starting with 66dcd81227e7aa47eb35aadf0d6942c2903966488eb568c7a10483ec8183ffab not found: ID does not exist"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.071366 4840 scope.go:117] "RemoveContainer" containerID="78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9"
Jan 29 12:57:54 crc kubenswrapper[4840]: E0129 12:57:54.071710 4840 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9\": container with ID starting with 78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9 not found: ID does not exist" containerID="78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.071756 4840 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9"} err="failed to get container status \"78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9\": rpc error: code = NotFound desc = could not find container \"78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9\": container with ID starting with 78076ad8a9e5d58288e7da276c449d1247a829bbd88495e16549738e921d52b9 not found: ID does not exist"
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.482239 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c63c703-ddef-459f-bea4-0592d8cbf6e9" (UID: "5c63c703-ddef-459f-bea4-0592d8cbf6e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.504189 4840 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c63c703-ddef-459f-bea4-0592d8cbf6e9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.642224 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5wwn7"]
Jan 29 12:57:54 crc kubenswrapper[4840]: I0129 12:57:54.651230 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5wwn7"]
Jan 29 12:57:55 crc kubenswrapper[4840]: I0129 12:57:55.009199 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" path="/var/lib/kubelet/pods/5c63c703-ddef-459f-bea4-0592d8cbf6e9/volumes"
Jan 29 12:58:05 crc kubenswrapper[4840]: I0129 12:58:05.001844 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:58:05 crc kubenswrapper[4840]: E0129 12:58:05.002813 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:58:18 crc kubenswrapper[4840]: I0129 12:58:18.001210 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:58:18 crc kubenswrapper[4840]: E0129 12:58:18.001878 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:58:31 crc kubenswrapper[4840]: I0129 12:58:31.001786 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:58:31 crc kubenswrapper[4840]: E0129 12:58:31.002752 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:58:42 crc kubenswrapper[4840]: I0129 12:58:42.001135 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:58:42 crc kubenswrapper[4840]: E0129 12:58:42.001864 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:58:57 crc kubenswrapper[4840]: I0129 12:58:57.001158 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:58:57 crc kubenswrapper[4840]: E0129 12:58:57.001981 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:59:08 crc kubenswrapper[4840]: I0129 12:59:08.001451 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:59:08 crc kubenswrapper[4840]: E0129 12:59:08.003302 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:59:19 crc kubenswrapper[4840]: I0129 12:59:19.004079 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:59:19 crc kubenswrapper[4840]: E0129 12:59:19.006019 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:59:31 crc kubenswrapper[4840]: I0129 12:59:31.002233 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:59:31 crc kubenswrapper[4840]: E0129 12:59:31.003103 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:59:42 crc kubenswrapper[4840]: I0129 12:59:42.002308 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:59:42 crc kubenswrapper[4840]: E0129 12:59:42.003997 4840 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-s2v8d_openshift-machine-config-operator(8bbaf604-6946-4bca-96af-be0e5fc92cf3)\"" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" podUID="8bbaf604-6946-4bca-96af-be0e5fc92cf3"
Jan 29 12:59:58 crc kubenswrapper[4840]: I0129 12:59:58.002800 4840 scope.go:117] "RemoveContainer" containerID="14bc120fbc8ec1e3f57142c5cd80e51f3eba4249a3ba2b5652994d024e5e3951"
Jan 29 12:59:58 crc kubenswrapper[4840]: I0129 12:59:58.958462 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-s2v8d" event={"ID":"8bbaf604-6946-4bca-96af-be0e5fc92cf3","Type":"ContainerStarted","Data":"f300ef63eb6b1992963901f39925bd5a623e1e8aaab72190d2a22e3b75dc297d"}
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.146318 4840 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"]
Jan 29 13:00:00 crc kubenswrapper[4840]: E0129 13:00:00.146748 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="extract-content"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.146766 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="extract-content"
Jan 29 13:00:00 crc kubenswrapper[4840]: E0129 13:00:00.146802 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="registry-server"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.146812 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="registry-server"
Jan 29 13:00:00 crc kubenswrapper[4840]: E0129 13:00:00.146827 4840 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="extract-utilities"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.146835 4840 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="extract-utilities"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.147016 4840 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c63c703-ddef-459f-bea4-0592d8cbf6e9" containerName="registry-server"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.147556 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.150281 4840 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.150423 4840 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.168724 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"]
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.241620 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/437ae1ac-0699-44a2-9352-fe0a608b3d05-config-volume\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.243221 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntx6p\" (UniqueName: \"kubernetes.io/projected/437ae1ac-0699-44a2-9352-fe0a608b3d05-kube-api-access-ntx6p\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.243514 4840 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/437ae1ac-0699-44a2-9352-fe0a608b3d05-secret-volume\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.345380 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntx6p\" (UniqueName: \"kubernetes.io/projected/437ae1ac-0699-44a2-9352-fe0a608b3d05-kube-api-access-ntx6p\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.345484 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/437ae1ac-0699-44a2-9352-fe0a608b3d05-secret-volume\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.345571 4840 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/437ae1ac-0699-44a2-9352-fe0a608b3d05-config-volume\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.346737 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/437ae1ac-0699-44a2-9352-fe0a608b3d05-config-volume\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.355812 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/437ae1ac-0699-44a2-9352-fe0a608b3d05-secret-volume\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.364933 4840 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntx6p\" (UniqueName: \"kubernetes.io/projected/437ae1ac-0699-44a2-9352-fe0a608b3d05-kube-api-access-ntx6p\") pod \"collect-profiles-29494860-b8h8m\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.467043 4840 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.899188 4840 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"]
Jan 29 13:00:00 crc kubenswrapper[4840]: I0129 13:00:00.977218 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m" event={"ID":"437ae1ac-0699-44a2-9352-fe0a608b3d05","Type":"ContainerStarted","Data":"5c8a05cad8247b370f3c421c1707e3c3bdbf6df48396207588353175d02e21ff"}
Jan 29 13:00:01 crc kubenswrapper[4840]: I0129 13:00:01.987255 4840 generic.go:334] "Generic (PLEG): container finished" podID="437ae1ac-0699-44a2-9352-fe0a608b3d05" containerID="de6f93b761a06160e3ee088df45e4f5b7759e8e18853e4b003ab482ad7b18712" exitCode=0
Jan 29 13:00:01 crc kubenswrapper[4840]: I0129 13:00:01.987368 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m" event={"ID":"437ae1ac-0699-44a2-9352-fe0a608b3d05","Type":"ContainerDied","Data":"de6f93b761a06160e3ee088df45e4f5b7759e8e18853e4b003ab482ad7b18712"}
Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.307692 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m"
Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.390356 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntx6p\" (UniqueName: \"kubernetes.io/projected/437ae1ac-0699-44a2-9352-fe0a608b3d05-kube-api-access-ntx6p\") pod \"437ae1ac-0699-44a2-9352-fe0a608b3d05\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") "
Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.390520 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/437ae1ac-0699-44a2-9352-fe0a608b3d05-config-volume\") pod \"437ae1ac-0699-44a2-9352-fe0a608b3d05\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") "
Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.390623 4840 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/437ae1ac-0699-44a2-9352-fe0a608b3d05-secret-volume\") pod \"437ae1ac-0699-44a2-9352-fe0a608b3d05\" (UID: \"437ae1ac-0699-44a2-9352-fe0a608b3d05\") "
Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.391531 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/437ae1ac-0699-44a2-9352-fe0a608b3d05-config-volume" (OuterVolumeSpecName: "config-volume") pod "437ae1ac-0699-44a2-9352-fe0a608b3d05" (UID: "437ae1ac-0699-44a2-9352-fe0a608b3d05"). InnerVolumeSpecName "config-volume".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.399312 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437ae1ac-0699-44a2-9352-fe0a608b3d05-kube-api-access-ntx6p" (OuterVolumeSpecName: "kube-api-access-ntx6p") pod "437ae1ac-0699-44a2-9352-fe0a608b3d05" (UID: "437ae1ac-0699-44a2-9352-fe0a608b3d05"). InnerVolumeSpecName "kube-api-access-ntx6p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.401134 4840 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/437ae1ac-0699-44a2-9352-fe0a608b3d05-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "437ae1ac-0699-44a2-9352-fe0a608b3d05" (UID: "437ae1ac-0699-44a2-9352-fe0a608b3d05"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.492367 4840 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntx6p\" (UniqueName: \"kubernetes.io/projected/437ae1ac-0699-44a2-9352-fe0a608b3d05-kube-api-access-ntx6p\") on node \"crc\" DevicePath \"\"" Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.492845 4840 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/437ae1ac-0699-44a2-9352-fe0a608b3d05-config-volume\") on node \"crc\" DevicePath \"\"" Jan 29 13:00:03 crc kubenswrapper[4840]: I0129 13:00:03.492926 4840 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/437ae1ac-0699-44a2-9352-fe0a608b3d05-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 29 13:00:04 crc kubenswrapper[4840]: I0129 13:00:04.004221 4840 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m" 
event={"ID":"437ae1ac-0699-44a2-9352-fe0a608b3d05","Type":"ContainerDied","Data":"5c8a05cad8247b370f3c421c1707e3c3bdbf6df48396207588353175d02e21ff"} Jan 29 13:00:04 crc kubenswrapper[4840]: I0129 13:00:04.004630 4840 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c8a05cad8247b370f3c421c1707e3c3bdbf6df48396207588353175d02e21ff" Jan 29 13:00:04 crc kubenswrapper[4840]: I0129 13:00:04.004244 4840 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29494860-b8h8m" Jan 29 13:00:04 crc kubenswrapper[4840]: I0129 13:00:04.395561 4840 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc"] Jan 29 13:00:04 crc kubenswrapper[4840]: I0129 13:00:04.400738 4840 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29494815-8fqzc"] Jan 29 13:00:05 crc kubenswrapper[4840]: I0129 13:00:05.016364 4840 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac3fae12-df97-4e5b-85a9-a609f7291cdb" path="/var/lib/kubelet/pods/ac3fae12-df97-4e5b-85a9-a609f7291cdb/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515136654750024461 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015136654751017377 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015136646007016515 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015136646010015457 5ustar corecore